<HTML><HEAD><TITLE>Logic</TITLE>
      <STYLE>
      BODY,H1,H2,H3,H4,H5,H6,P,CENTER,TD,TH,UL,DL,DIV,PRE
		   {
	      font-family: Lucida Sans Unicode, Geneva, Arial, Helvetica, sans-serif;
         }
		
      DIV.BLOCK
         {
         color:black;
         border:solid thin black;
         padding:1.0em 1.8em;
         }
		
      DIV.BLOCK DIV.BLOCK
         {
         color:black;
         border:solid thin black;
         padding:1.0em 1.8em;
         margin:1.5em
         }
      
      DIV.GRAYBLOCK
         {
         background:#E0E0E0;
         color:black;
         border:solid thin black;
         padding:1.0em 1.8em;
         }
		
      DIV.GRAYBLOCK DIV.GRAYBLOCK
         {
         background:#E0E0E0;
         color:black;
         border:solid thin black;
         padding:1.0em 1.8em;
         margin:1.5em
         }
      </STYLE>
</HEAD><BODY><HR><HR><A NAME="1"></A><H2>1.  Welcome </H2><UL>[ <I> Welcome </I> ]</UL>
<H4>Message</H4>                

   <P> Welcome to The Web Outline of Logic.  I hope you find it a resource for reference and learning.  If you find any mistakes, omissions, or inaccuracies, or just want to make a general comment or suggestion, please <A HREF="mailto:ronpro@cox.net" TARGET=_top> email me </A>.

   </P> <P> Thank you and please come back often.  The Web Outline of Logic is always evolving.

</P>  <H4>How To Use</H4>                

   <P> This web page is divided into multiple frames.  The pane on the left contains a navigation tree, while the one on the right (the pane containing this text you are now reading) is the content pane.

   </P> <P> The navigation tree is an outline of articles in this web site.  When you first load this web site, the navigation tree is completely collapsed (meaning you can only see the top-level articles).  Clicking on a [+] will expand one level of the tree while clicking on a [-] will collapse the subtree.  The article titles in the tree which are underlined can be clicked.  This will cause the article to appear in the content frame.

   </P> <P> Every article in the outline is contained in the content pane.  It's possible to scroll through all articles in this one frame.  However, this is not recommended.  The outline in the navigation frame has many levels and it's easy to lose the context of any particular article without using the navigation tree.  If you do lose your context, it's still possible to locate yourself.  At the top of every article is a 'path' which shows your location in the tree.  Each branch in this path is separated by a double colon (::).

</P>  <H4>Acknowledgements</H4>                

   <P> This site is powered by <A HREF="http://www.natara.com" TARGET=_top> Natara Bonsai </A> and my own <A HREF="http://www.python.org" TARGET=_top> python </A> utility to convert Bonsai files into this web page.<BR>

<a href="http://www.natara.com/">
<img src="img/BonsaiSm.gif" align=top width=151
     height=122 alt="[Bonsai Powered]" border=0>
</a>
<a href="http://www.python.org/">
<img src="img/PythonPoweredAnim.gif" align=top width=110
     height=44 alt="[Python Powered]" border=0>
</a>
<a href="http://wingware.com/">
<img src="img/coded_w_wing_medium.png" align=top width=145
     height=40 alt="[Coded with Wing]" border=0>
</a>  

 </P>  
<HR><HR><A NAME="2"></A><H2>2.  Organization </H2><UL>[ <I> Welcome :: Organization </I> ]</UL>
<H4>Description</H4>                

   <P> The criteria for the organization of this outline are first as a reference and second as a tool for learning.  The outline begins with a simple definition of logic as the study and evaluation of arguments.  The next topic of the outline is that of arguments.  The primary goal of this section is to introduce the argument from the perspective of the definition of logic provided; that is, its focus is to give the reader everything needed to identify and evaluate an argument.  As added material, the last part of the section covers argumentation in the real world; most courses on logic either skip this part altogether or study only the subsection on Fallacies.  The fourth section of the outline is a detailed study of the proposition (the first component of an argument), and the fifth is a detailed study of inference (the second).  After that, the outline enters the more advanced topics of metatheory, philosophy of logic, and non-classical logical systems.

   </P> <P> Future versions of this outline will include syllabi for different courses of study.  

 </P>  
<HR><HR><A NAME="3"></A><H2>3.  About </H2><UL>[ <I> Welcome :: About </I> ]</UL>
<H4></H4><UL>            

   <LI> Version:  2008-03-06

   </LI> <LI> Number of Articles:  1000  

 </LI> </UL> 
<HR><HR><A NAME="4"></A><H2>4.  Revision Log </H2><UL>[ <I> Welcome :: Revision Log </I> ]</UL>
<H4>2007.02.07</H4>                

   <P> Many sections of the outline have been fleshed out.  I'm now moving the outline in a direction in which it is not so deeply nested.  Several corrections have been made.  Many of the articles have been rewritten with clearer text.

</P>  <H4>2005.03.19</H4>                

   <P> Until now, the outline has been organized as a system of reference.  This was not a place for someone wishing to learn something about logic.  I am now in the process of creating some continuity among the articles with the purpose of making the outline more useful to the casual student of logic.  However, the outline is first a reference tool.  For any cases which create conflict between the 'outline as a reference' and the 'outline as a learning tool', the first of these criteria will always win out.

</P>  <H4>2005.03.01</H4>                

   <P> Upgrade of The Logic Web to a fully navigable outline.  Welcome to "The Web Outline of Logic".  

 </P>  
<HR><HR><A NAME="5"></A><H2>5.  Logic </H2><UL>[ <I> Logic </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

<DIV CLASS="BLOCK">
   <P> Logic is the science that studies and evaluates arguments.
</P>

</DIV>  <H4>Section Overview</H4>                

   <P> The first section, <I>Argument</I>, will give you a good understanding of what this definition entails.  It will lead you from identifying and knowing the parts of an argument, to evaluating an argument.

   </P> <P> The remaining sections will expand upon complex topics mentioned in the <I>Argument</I> section.
  

 </P>  
<HR><HR><A NAME="6"></A><H2>6.  Argument </H2><UL>[ <I> Argument </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A sequence of statements of which one is intended as a conclusion (or <I>finding</I>) and the others, the premises, are claimed to prove or at least provide some evidence for the conclusion.

</P>  <H4>See Also</H4><UL>            

   <LI> definition of <A HREF="#1284" TARGET="baseframe">consequence</A>  

 </LI> </UL> 
<HR><HR><A NAME="7"></A><H2>7.  Analysis </H2><UL>[ <I> Argument :: Analysis </I> ]</UL>
<H4>Description</H4>                

   <P> Analysis begins with Identification.  We need to know we have an argument before we can examine it.  Identification of an argument can often be difficult.  Many arguments are written as if the author were stating raw facts; conversely, many forms of discourse which may seem to be arguments are in fact not.  However, there are clues as to what is an argument.

   </P> <P> This section begins by describing the various components of an argument; that is, the things that all arguments have.  It then goes on to discuss how to identify arguments within a discourse.  Next comes classifying arguments and, finally, evaluating them.

</P>  <H4>See Also</H4><UL>            

   <LI> definition of <A HREF="#1277" TARGET="baseframe">analyze</A>  

 </LI> </UL> 
<HR><HR><A NAME="8"></A><H2>8.  Components </H2><UL>[ <I> Argument :: Analysis :: Components </I> ]</UL>
<H4>Description</H4>                

   <P> There are two building blocks of arguments, propositions and inference.  In turn, within the context of arguments, there are two roles for propositions, premise and conclusion.  All arguments contain premises, a conclusion and inference.  

 </P>  
<HR><HR><A NAME="9"></A><H2>9.  Proposition </H2><UL>[ <I> Argument :: Analysis :: Components :: Proposition </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

<DIV CLASS="BLOCK">
   <P> A claim, usually expressed as a declarative sentence, that is either true or false.
</P>

</DIV>  <H4>Notes</H4><UL>            

   <LI> Propositions are the fundamental unit (building block) of arguments.

   </LI> <LI> A proposition is not fixed to one particular sequence of words or even one language.  The same proposition may be expressed in many different ways.

   </LI> <LI> A proposition can be classified by its role in an argument, as a <I>premise</I> or a <I>conclusion</I>.</LI>

</UL>  <H4>Examples</H4><UL>            

   <DIV CLASS="GRAYBLOCK">
   <LI> Dogs do not fly. -- <I> true</I>

   </LI> <LI> Snow is red. -- <I> false</I>

   </LI> <LI> E.T.'s exist. -- <I> unknown, but either true or false.</I>
   </LI>

</DIV> </UL> <H4>See Also</H4><UL>            

   <LI> definition of <A HREF="#1280" TARGET="baseframe">assert</A>

   </LI> <LI> definition of <A HREF="#1275" TARGET="baseframe">affirm</A>

   </LI> <LI> definition of <A HREF="#1286" TARGET="baseframe">deny</A>

   </LI> <LI> definition of <A HREF="#1279" TARGET="baseframe">assent</A>

   </LI> <LI> definition of <A HREF="#1288" TARGET="baseframe">dissent</A>  

 </LI> </UL> 
<HR><HR><A NAME="10"></A><H2>10.  Premise </H2><UL>[ <I> Argument :: Analysis :: Components :: Proposition :: Premise </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition, in the context of an argument, whose role is to prove or at least provide some evidence for the conclusion.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
   Premise1:	All humans are mortal.
   Premise2:	Socrates is human.

   Conclusion:	Socrates is mortal.
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="11"></A><H2>11.  Conclusion </H2><UL>[ <I> Argument :: Analysis :: Components :: Proposition :: Conclusion </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition, in the context of an argument, whose role is to state the "finding".  It's the proposition which states what's being argued, justified or proven.

</P>  <H4>Notes</H4>                

   <P> The conclusion is often said to 'follow' from the premises.  

 </P>  
<HR><HR><A NAME="12"></A><H2>12.  Inference </H2><UL>[ <I> Argument :: Analysis :: Components :: Inference </I> ]</UL>

<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The reasoning process expressed by an argument.

</P>  <H4>Notes</H4><UL>            

   <LI> Inference is the glue which binds the propositions of an argument.  

 </LI> </UL> 
<HR><HR><A NAME="13"></A><H2>13.  Identification </H2><UL>[ <I> Argument :: Analysis :: Identification </I> ]</UL>
<H4>Description</H4>                

   <P> Analysis of an argument begins with Identification.  We need to know we have an argument before we can examine it.  The way to identify an argument is to find all the components of an argument in a contiguous piece of discourse.  Identification of an argument can often be difficult.  Many arguments are written as if the author were stating raw facts; conversely, many forms of discourse which may seem to be arguments are in fact not.  However, there are clues as to what is an argument.  Perhaps the easiest way is by recognizing the presence of the components.
  

 </P>  
<HR><HR><A NAME="14"></A><H2>14.  Indicators </H2><UL>[ <I> Argument :: Analysis :: Identification :: Indicators </I> ]</UL>
<H4>Description</H4>                

   <P> An indicator is a word or phrase that clues us in that an argument is being stated.  There are two kinds of indicators: those that indicate premises and those that indicate conclusions.  The following lists provide the most commonly used indicators.  

 </P>  
<HR><HR><A NAME="15"></A><H2>15.  Premise Indicators </H2><UL>[ <I> Argument :: Analysis :: Identification :: Indicators :: Premise Indicators </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A word or phrase used to introduce a premise.

</P>  <H4>Examples</H4>                

   <P> In the following, the letter P is used to mark the most common position of the premise relative to the associated indicator.

</P>  <H4></H4><UL TYPE=SQUARE>
   <LI> as P
   </LI> <LI> as indicated by P
   </LI> <LI> as shown by the fact that P
   </LI> <LI> assuming that P
   </LI> <LI> because P
   </LI> <LI> for P
   </LI> <LI> for the reason that P
   </LI> <LI> given that P
   </LI> <LI> granted that P
   </LI> <LI> in that P
   </LI> <LI> in view of the fact that P
   </LI> <LI> inasmuch as P
   </LI> <LI> it is a fact that P
   </LI> <LI> may be inferred from P
   </LI> <LI> one cannot doubt that P
   </LI> <LI> owing to P
   </LI> <LI> seeing that P
   </LI> <LI> since P
   </LI> <LI> the reason is that P
   </LI> <LI> this is true because P  

 </LI> </UL> 
<HR><HR><A NAME="16"></A><H2>16.  Conclusion Indicators </H2><UL>[ <I> Argument :: Analysis :: Identification :: Indicators :: Conclusion Indicators </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A word or phrase used to introduce a conclusion.

</P>  <H4>Examples</H4>                

   <P> In the following, the letter C is used to mark the most common position of the conclusion relative to the associated indicator.

</P>  <H4></H4><UL TYPE=SQUARE>
   <LI> accordingly C
   </LI> <LI> as a result C
   </LI> <LI> consequently C
   </LI> <LI> as a consequence C
   </LI> <LI> as a logical consequence C
   </LI> <LI> entails that C
   </LI> <LI> for this reason C
   </LI> <LI> from which we can infer that C
   </LI> <LI> hence C
   </LI> <LI> we may conclude C
   </LI> <LI> in conclusion C
   </LI> <LI> it follows that C
   </LI> <LI> it follows necessarily that C
   </LI> <LI> so C
   </LI> <LI> the moral is C
   </LI> <LI> therefore C
   </LI> <LI> this being so C
   </LI> <LI> thus C
   </LI> <LI> we may infer C
   </LI> <LI> wherefore C
   </LI> <LI> which means that C
   </LI> <LI> which proves that C  

 </LI> </UL> 
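<P> As an illustration of how these lists can be put to work, the following is a minimal Python sketch of indicator-based identification.  The indicator lists are abbreviated samples of the lists above, and the matching is deliberately naive; real identification still requires the analysis described in this section. </P>

```python
# A minimal sketch of indicator-based argument detection.
# The indicator phrases are abbreviated samples of the lists above.

PREMISE_INDICATORS = ["because", "since", "for the reason that",
                      "given that", "as shown by the fact that"]
CONCLUSION_INDICATORS = ["therefore", "hence", "thus", "it follows that",
                         "consequently", "so"]

def classify_sentence(sentence):
    """Return 'premise', 'conclusion', or None based on indicator words."""
    lowered = sentence.lower()
    for phrase in CONCLUSION_INDICATORS:
        if lowered.startswith(phrase) or f" {phrase} " in lowered:
            return "conclusion"
    for phrase in PREMISE_INDICATORS:
        if lowered.startswith(phrase) or f" {phrase} " in lowered:
            return "premise"
    return None

print(classify_sentence("Since all humans are mortal, Socrates is too."))
print(classify_sentence("Therefore, Socrates is mortal."))
```

<P> Note that this sketch will misfire on non-argumentative uses of words like 'since' and 'so'; it is a clue-finder, not a substitute for reading the passage. </P>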
<HR><HR><A NAME="17"></A><H2>17.  Hidden </H2><UL>[ <I> Argument :: Analysis :: Identification :: Hidden </I> ]</UL>
<H4>Description</H4>                

   <P> When there are no indicators, it's necessary to analyze what the author is saying.  The following may help to provide some assistance in locating the conclusion.  

 </P>  
<HR><HR><A NAME="18"></A><H2>18.  Thesis-Argument </H2><UL>[ <I> Argument :: Analysis :: Identification :: Hidden :: Thesis-Argument </I> ]</UL>
<H4>Description</H4>                

   <P> When an argumentative stance is taken, the thesis is usually one of the first things stated.  The thesis IS the conclusion; it's what the author intends to prove in the text that follows.  

 </P>  
<HR><HR><A NAME="19"></A><H2>19.  Analytical questions </H2><UL>[ <I> Argument :: Analysis :: Identification :: Hidden :: Analytical questions </I> ]</UL>
<H4>Description</H4>                

   <P> When there are no indicators and simple structural analysis fails, it's necessary to analyze what the author is saying.  The following questions will help in that task, and also in identifying the parts of the argument.

</P>  <H4>Questions</H4><UL>            

   <LI> What single statement is claimed (implicitly) to follow from the others?

   </LI> <LI> What is the arguer trying to prove?

   </LI> <LI> What is the main point in the passage?  

 </LI> </UL> 
<HR><HR><A NAME="20"></A><H2>20.  Classification </H2><UL>[ <I> Argument :: Analysis :: Classification </I> ]</UL>
<H4>Description</H4>                

   <P> All arguments are divided into one of two mutually exclusive categories.  Logicians use vastly different techniques to study the arguments of these two categories.  Thus, Logic itself is divided into two disciplines which take their names from the categories.  

 </P>  
<HR><HR><A NAME="21"></A><H2>21.  Deduction </H2><UL>[ <I> Argument :: Analysis :: Classification :: Deduction </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An argument in which it is impossible for the conclusion to be false while the premises are all true.

</P>  <H4>Examples</H4><UL>            

   <LI> <DIV CLASS="GRAYBLOCK">All men are mortal.<BR>
   Socrates is a man.<BR>
   Therefore, Socrates is mortal.</DIV>

   </LI> <LI> If it's raining then the ground is wet.<BR>
   It's raining.<BR>
   So, the ground is wet.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Deduction deals with arguments of propositions which are all claimed to have maximal inductive probabilities (i.e. all probabilities are 1.0).

</LI> </UL> <H4>See Also</H4><UL>            

   <LI> <A HREF="#275" TARGET="baseframe">Deduction</A>  

 </LI> </UL> 
<HR><HR><A NAME="22"></A><H2>22.  Induction </H2><UL>[ <I> Argument :: Analysis :: Classification :: Induction </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An argument in which it is possible for the conclusion to be false while all the premises are true.

</P>  <H4>Examples</H4><UL>            

   <LI> Most college students are liberal minded.<BR>
   Tom is a college student.<BR>
   Therefore, Tom is liberal minded.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> For every inductive argument at least one of the following holds:  1)  At least one premise has a probability (likelihood) associated with it (it is neither absolutely true nor absolutely false such as the premises of a deductive argument).  2)  There are unstated premises.

   </LI> <LI> In an inductive argument, the arguer is making the claim that although it's possible for the conclusion to be false, given the premises, it's very improbable.

   </LI> <LI> If an argument is missing or has unstated premises it is always inductive.

</LI> </UL> <H4>See Also</H4><UL>            

   <LI> <A HREF="#823" TARGET="baseframe">Induction</A>  

 </LI> </UL> 
<HR><HR><A NAME="23"></A><H2>23.  Tools </H2><UL>[ <I> Argument :: Analysis :: Tools </I> ]</UL>
<H4>Description</H4>                

   <P> For the analysis, and ultimately the evaluation, of arguments, logicians have developed various tools to help better understand an argument's content, its physical structure and its inferential structure (train of thought).  

 </P>  
<HR><HR><A NAME="24"></A><H2>24.  Graphical </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical </I> ]</UL>
<H4>Description</H4>                

   <P> Most tools for studying an argument are specific to the kind of argument, inductive or deductive.  These graphical tools, while limited in the kinds of information they can provide about an argument, will work with all kinds of arguments.  

 </P>  
<HR><HR><A NAME="25"></A><H2>25.  Standard Form </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical :: Standard Form </I> ]</UL>
<H4>Description</H4>                

   <P> The standard form is perhaps the simplest tool for working with arguments.  It's useful for simply understanding an argument (which can sometimes be difficult due to complex or tricky wording). It is also useful for examining an argument to classify it as either deductive or inductive.

   </P> <P> The standard form is simple:  It's one proposition per line.  First list the fundamental premises, then any derived premises and finally the conclusion.  It's customary to place a triangle of three dots (<B>&#8756;</B>), read 'therefore', in front of the conclusion.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
	All humans are mortal.
	Socrates is human.
   <B>&#8756;</B>	Socrates is mortal.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Some people prefer to draw a horizontal line separating the premises from the conclusion, as if adding a column of numbers.  In this case the 'therefore' symbol is often omitted.  

 </LI> </UL> 
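<P> The layout described above is simple enough to generate mechanically.  The following Python sketch (the function name is my own) renders a list of premises and a conclusion in standard form, with the 'therefore' sign in front of the conclusion: </P>

```python
# A small sketch that renders an argument in the standard form
# described above: one proposition per line, premises first,
# with the 'therefore' sign (U+2234) in front of the conclusion.

def standard_form(premises, conclusion):
    lines = [f"    {p}" for p in premises]
    lines.append(f"\u2234   {conclusion}")
    return "\n".join(lines)

print(standard_form(["All humans are mortal.", "Socrates is human."],
                    "Socrates is mortal."))
```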
<HR><HR><A NAME="26"></A><H2>26.  Diagrams </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical :: Diagrams </I> ]</UL>
<H4>Description</H4>                

   <P> Standard form is usually sufficient for working with a simple argument of just two or three premises.  However, it's less useful when working with a larger number of premises and a complex inferential structure (an argument involving intermediate conclusions).  For such an argument, an <B>argument diagram</B> is often more useful.

   </P> <P> To begin, each proposition of the argument is numbered.  Then the diagram is drawn according to four rules.

</P>  <H4>Notes</H4><UL>            

   <LI> From this set of rules we see the four possible ways in which conclusions can be drawn from premises.  That is, there are four possible premise-conclusion relationship patterns.  

 </LI> </UL> 
<HR><HR><A NAME="27"></A><H2>27.  Vertical Pattern (1:1) </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical :: Diagrams :: Vertical Pattern (1:1) </I> ]</UL>
<H4>Diagram</H4>                

   <P> A circled number j is placed above a circled number k.  An arrow is drawn from j to k.

</P>  <H4>Semantics</H4>                

   <P> A single premise (j) leads to a single (possibly intermediate) conclusion (k).  

 </P>  
<HR><HR><A NAME="28"></A><H2>28.  Independent Premises (n:1) </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical :: Diagrams :: Independent Premises (n:1) </I> ]</UL>
<H4>Diagram</H4>                

   <P> The circled numbers for the premises P1 through Pn are placed in a row, left to right.  The circled number for the conclusion C is placed centered below the row of premises.  For each premise Pi, an arrow is drawn from Pi to C.

</P>  <H4>Semantics</H4>                

   <P> Multiple premises P1 through Pn each independently lead to a single conclusion C.  

 </P>  
<HR><HR><A NAME="29"></A><H2>29.  Conjoint Premises (n:1) </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical :: Diagrams :: Conjoint Premises (n:1) </I> ]</UL>
<H4>Diagram</H4>                

   <P> The circled numbers for the premises P1 through Pn are placed in a row, left to right, then collectively underscored by a 'lazy brace' pointing down.  The circled number for the conclusion C is placed centered below the row of premises.  An arrow is drawn from the tip of the brace to C.

</P>  <H4>Semantics</H4>                

   <P> Multiple premises P1 through Pn conjointly support a single conclusion C.  

 </P>  
<HR><HR><A NAME="30"></A><H2>30.  Multiple Conclusion (1:n) </H2><UL>[ <I> Argument :: Analysis :: Tools :: Graphical :: Diagrams :: Multiple Conclusion (1:n) </I> ]</UL>
<H4>Diagram</H4>                

   <P> The circled number for the premise P is placed on top.  The circled numbers for the conclusions C1 through Cn are placed below, from left to right, then collectively overscored by a 'lazy brace' pointing up.  An arrow is drawn from P to the tip of the brace.

</P>  <H4>Semantics</H4>                

   <P> A single premise P supports multiple conclusions C1 through Cn.  

 </P>  
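<P> All four patterns can be recorded with the same small data structure: each inference step lists the propositions acting as premises, the propositions concluded, and whether the premises work conjointly.  The following Python sketch uses illustrative names of my own choosing; it is not a standard notation: </P>

```python
# A sketch of an argument-diagram step: propositions are numbered, and
# each inference step records its premises, its conclusions, and whether
# the premises work conjointly.  (Names are illustrative, not standard.)

class Step:
    def __init__(self, premises, conclusions, conjoint=False):
        self.premises = premises        # list of proposition numbers
        self.conclusions = conclusions  # list of proposition numbers
        self.conjoint = conjoint        # True: premises support jointly

# Vertical pattern (1:1): proposition 1 supports proposition 2.
vertical = Step([1], [2])
# Independent premises (n:1): 1 and 2 each support 3 on their own.
independent = Step([1, 2], [3], conjoint=False)
# Conjoint premises (n:1): 1 and 2 support 3 only together.
conjoint = Step([1, 2], [3], conjoint=True)
# Multiple conclusions (1:n): 1 supports both 2 and 3.
multiple = Step([1], [2, 3])
```

<P> A full diagram for a complex argument is then just a list of such steps, with intermediate conclusions reappearing as premises of later steps. </P>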
<HR><HR><A NAME="31"></A><H2>31.  Symbolic </H2><UL>[ <I> Argument :: Analysis :: Tools :: Symbolic </I> ]</UL>
<H4>Description</H4>                

   <P> While graphical tools study the inferential structure of arguments, symbolic tools study patterns of arguments.  

 </P>  
<HR><HR><A NAME="32"></A><H2>32.  Informal </H2><UL>[ <I> Argument :: Analysis :: Tools :: Symbolic :: Informal </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Critical Thinking

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The study of particular arguments in their natural-language contexts.

</P>  <H4>Notes</H4><UL>            

   <LI> These are arguments written or spoken in a native language such as English.  

 </LI> </UL> 
<HR><HR><A NAME="33"></A><H2>33.  Formal </H2><UL>[ <I> Argument :: Analysis :: Tools :: Symbolic :: Formal </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Symbolic Logic

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The study of argument forms.  Formal logic studies patterns of reasoning rather than particular arguments.

</P>  <H4>Examples</H4><UL>            

   <LI> Informal:
<PRE>
	If it's raining then the car is wet.
	It's raining.
   <B>&#8756;</B>	The car is wet.
</PRE>

   </LI> <LI> Formal:
<PRE>
	if P then Q.
	P.
   <B>&#8756;</B>	Q.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The theory behind formal logic is that an argument is valid if it follows a form that is considered valid.

   </LI> <LI>  An informal argument or proposition that follows the form of a formal argument or proposition is said to be an 'instance' of the formal counterpart.  

 </LI> </UL> 
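<P> The formal example above can be checked mechanically.  A deductive form is valid when no assignment of truth values makes every premise true and the conclusion false, so for a small number of propositional variables we can simply try every assignment.  The Python sketch below encodes 'if P then Q' as the material conditional; the function name is my own: </P>

```python
# A brute-force validity check for the argument form shown above
# (if P then Q; P; therefore Q).  A form is valid when no assignment
# of truth values makes every premise true and the conclusion false.
from itertools import product

def is_valid(premises, conclusion, variables):
    """premises and conclusion are functions of a dict of truth values."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found: form is invalid
    return True

modus_ponens = is_valid(
    premises=[lambda e: (not e["P"]) or e["Q"],  # if P then Q
              lambda e: e["P"]],                 # P
    conclusion=lambda e: e["Q"],                 # Q
    variables=["P", "Q"])
print(modus_ponens)  # → True
```

<P> The same function exposes invalid forms: swapping the second premise and the conclusion gives the fallacy of affirming the consequent, for which a counterexample (P false, Q true) exists. </P>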
<HR><HR><A NAME="34"></A><H2>34.  Evaluation </H2><UL>[ <I> Argument :: Evaluation </I> ]</UL>
<H4>Description</H4>                

   <P> The purpose of an argument is to get to the truth.  A factual conclusion is the golden nugget.  In evaluating an argument, the goal is to determine how much confidence we can put in the conclusion: how certain we can be that what we've found is not just fool's gold.
  

 </P>  
<HR><HR><A NAME="35"></A><H2>35.  Criteria </H2><UL>[ <I> Argument :: Evaluation :: Criteria </I> ]</UL>
<H4>Description</H4>                

   <P> There are two criteria against which we evaluate a conclusion.
  

 </P>  
<HR><HR><A NAME="36"></A><H2>36.  Truth </H2><UL>[ <I> Argument :: Evaluation :: Criteria :: Truth </I> ]</UL>
<H4>Description</H4>                

   <P> The probability that the conclusion is in fact true.  Ideally, this probability is 100% (absolutely true).
  

 </P>  
<HR><HR><A NAME="37"></A><H2>37.  Invulnerability </H2><UL>[ <I> Argument :: Evaluation :: Criteria :: Invulnerability </I> ]</UL>
<H4>Description</H4>                

   <P> How well will the conclusion stand up to attack and the advancement of new evidence?  Ideally, we want an argument such that no new evidence will affect the conclusion.
  

 </P>  
<HR><HR><A NAME="38"></A><H2>38.  Claims </H2><UL>[ <I> Argument :: Evaluation :: Claims </I> ]</UL>
<H4>Description</H4>                

   <P> A presenter of an argument makes three claims about the argument.  

 </P>  
<HR><HR><A NAME="39"></A><H2>39.  Inferential Claim </H2><UL>[ <I> Argument :: Evaluation :: Claims :: Inferential Claim </I> ]</UL>

<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The claim that there are no errors in the reasoning process used to derive the conclusion.

</P>  <H4>Notes</H4><UL>            

   <LI> The bulk of logic is concerned with this claim.

</LI> </UL> <H4>See Also</H4><UL>            

   <LI> <A HREF="#12" TARGET="baseframe">Inference</A>
  

 </LI> </UL> 
<HR><HR><A NAME="40"></A><H2>40.  Factual Claim </H2><UL>[ <I> Argument :: Evaluation :: Claims :: Factual Claim </I> ]</UL>

<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The claim that the premises are true.  The audience must agree with the premises in order to accept the conclusion.  If there is doubt about any premise, the arguer then has the further burden of proof for the premises in doubt.

   </P> <P> Determination of the truth of particular propositions is <I>not</I> within the realm of logic, but rather in the realm of the topic being discussed.  

 </P>  
<HR><HR><A NAME="41"></A><H2>41.  Relevance Claim </H2><UL>[ <I> Argument :: Evaluation :: Claims :: Relevance Claim </I> ]</UL>
<H4>Description</H4>                

   <P>The claim that the premises are relevant to the conclusion.

</P>  <H4>Notes</H4><UL>            

   <LI> It may seem that the claim of relevance is implicitly one aspect of the inferential claim, and indeed it probably is for the lay definition.  However, the classical theory of inference does not consider relevance, therefore we list it here as a third claim.  

 </LI> </UL> 
<HR><HR><A NAME="42"></A><H2>42.  Assessment </H2><UL>[ <I> Argument :: Evaluation :: Assessment </I> ]</UL>
<H4>Description</H4>                

   <P> In evaluating an argument we consider the <I>claims</I> as requirements for a good argument.  The better an argument fulfills those requirements (claims), the more confidence we can have in the conclusion.

   </P> <P> The evaluative criteria for a conclusion are assessed indirectly as a function of other more easily obtainable pieces of information:  the type of argument (inductive or deductive) and the truth of the factual and inferential claims.

   </P> <P> To assess an argument from the classical point of view, first determine if it's deductive or inductive.  Next determine if the argument fulfills the factual and inferential claims.  Now use the table below to determine what kind of argument you have.

<PRE>
Inferential   Factual    Assessment
Claim         Claim      deductive   inductive

no            ---        invalid     weak
yes           ---        valid       strong
yes           yes        sound       cogent
</PRE>

   </P> <P> For the first and second lines, the factual claim doesn't even come into play.  Given this, all sound arguments are valid and all cogent arguments are strong.  Since the factual claim doesn't come into play, we can only state for valid and strong arguments that the conclusion is true if the premises are true.  All we know is that the reasoning is good.

   </P> <P> <B>Invalid</B> and <B>weak</B> arguments are usually not worth bothering with in terms of our confidence in the conclusion.  Such arguments may be fruitful in helping to construct better arguments or, in the case of a debate between individuals, such arguments provide much material for exposure of the reasoning flaws.

   </P> <P> <B>Valid</B> and <B>strong</B> arguments provide only slightly more confidence in their conclusion.  While the truth of the conclusion is not known, if the premises are at least <I>reasonable</I>, then the conclusion is also reasonable.  In the case of debate, point out or challenge the bad premise(s).

   </P> <P> <B>Sound</B> and <B>cogent</B> arguments are what we strive for.  Bottom line, a sound argument is invulnerable to attack.  Cogent arguments may be attacked depending upon how the premises and concluding propositions are stated.  All inductive arguments are vulnerable to new evidence, though a cogent argument is less so.

</P>  <H4>Notes</H4><UL>            

   <LI> Since this assessment of arguments is classical, it does not consider the relevance of the premises to the conclusion.  It is possible to have a sound argument with irrelevant premises.  In this case, the argument can still be attacked.  In practice an argument's premises must also be relevant to the conclusion to minimize attack.
  

 </LI> </UL> 
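<P> The assessment table above is mechanical enough to encode directly.  The following Python sketch (names of my own choosing) returns the classical assessment given the argument's type and whether its inferential and factual claims hold: </P>

```python
# A sketch encoding the assessment table above.  Note that when the
# inferential claim fails, the factual claim doesn't come into play
# (the '---' rows of the table).

def assess(kind, inferential_ok, factual_ok):
    """kind is 'deductive' or 'inductive'."""
    if not inferential_ok:
        return "invalid" if kind == "deductive" else "weak"
    if not factual_ok:
        return "valid" if kind == "deductive" else "strong"
    return "sound" if kind == "deductive" else "cogent"

print(assess("deductive", True, True))   # → sound
print(assess("inductive", True, False))  # → strong
```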
<HR><HR><A NAME="43"></A><H2>43.  Invalid </H2><UL>[ <I> Argument :: Evaluation :: Assessment :: Deductive Arguments :: Invalid </I> ]</UL>
<H4>Description</H4>                

   <P> A deductive argument with reasoning errors.  

 </P>  
<HR><HR><A NAME="44"></A><H2>44.  Valid </H2><UL>[ <I> Argument :: Evaluation :: Assessment :: Deductive Arguments :: Valid </I> ]</UL>
<H4>Description</H4>                

   <P> A deductive argument that is free from reasoning errors.  

 </P>  
<HR><HR><A NAME="45"></A><H2>45.  Sound </H2><UL>[ <I> Argument :: Evaluation :: Assessment :: Deductive Arguments :: Sound </I> ]</UL>
<H4>Description</H4>                

   <P> A valid argument whose premises are all true.  Sound arguments are invulnerable to the introduction of additional evidence.  They can only be refuted by attacking the truth of the premises.  

 </P>  
<HR><HR><A NAME="46"></A><H2>46.  Weak </H2><UL>[ <I> Argument :: Evaluation :: Assessment :: Inductive Arguments :: Weak </I> ]</UL>
<H4>Description</H4>                

   <P> An inductive argument with reasoning errors.  

 </P>  
<HR><HR><A NAME="47"></A><H2>47.  Strong </H2><UL>[ <I> Argument :: Evaluation :: Assessment :: Inductive Arguments :: Strong </I> ]</UL>
<H4>Description</H4>                

   <P> An inductive argument that is free from reasoning errors.  

 </P>  
<HR><HR><A NAME="48"></A><H2>48.  Cogent </H2><UL>[ <I> Argument :: Evaluation :: Assessment :: Inductive Arguments :: Cogent </I> ]</UL>
<H4>Description</H4>                

   <P> A strong argument with all true premises.  Cogent arguments are not invulnerable to attack or the introduction of new information; they state only that their conclusion is likely given the information known.  

 </P>  
<HR><HR><A NAME="49"></A><H2>49.  Refutation </H2><UL>[ <I> Argument :: Refutation </I> ]</UL>
<H4>Description</H4>                

   <P> Once an argument has been evaluated, it can be refuted.  

 </P>  
<HR><HR><A NAME="50"></A><H2>50.  Exposition </H2><UL>[ <I> Argument :: Refutation :: Exposition </I> ]</UL>
<H4>Description</H4>                

   <P> The most common technique of refuting an argument is to expose a flaw.  

 </P>  
<HR><HR><A NAME="51"></A><H2>51.  Point out a fallacy </H2><UL>[ <I> Argument :: Refutation :: Exposition :: Point out a fallacy </I> ]</UL>
<H4>Description</H4>                

   <P> Show what's wrong with the argument.  One way to do this is to point out the fallacy by name.  However, most people don't know fallacy names, so a much better technique is to demonstrate how the argument is flawed (which is often a second argument).  

 </P>  
<HR><HR><A NAME="52"></A><H2>52.  By Proof of Nonconsequence </H2><UL>[ <I> Argument :: Refutation :: Exposition :: By Proof of Nonconsequence </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An argument proving that a conclusion <I><B>does not</B></I> or may not follow from a set of premises.  

 </P>  
<HR><HR><A NAME="53"></A><H2>53.  Counter-Example </H2><UL>[ <I> Argument :: Refutation :: Exposition :: By Proof of Nonconsequence :: Counter-Example </I> ]</UL>
<H4>Description</H4>                

   <P> A counter-example is the simplest way to demonstrate non-consequence.  To refute an argument by counter-example, just show a case where the premises can be true while the conclusion is false.

</P>  <H4>Examples</H4><UL>            

   <LI> Argument:
<PRE>
 	All romances are literature.
	All fiction is literature.
   <B>&#8756;</B>	All romances are fiction.
</PRE>

   </LI> <LI> Form:
<PRE>
	All A are B.
	All C are B.
   <B>&#8756;</B>	All A are C.
</PRE>

   </LI> <LI> Counterexample:
<PRE>
	All cats are animals.
	All dogs are animals.
   <B>&#8756;</B>	All cats are dogs.
</PRE>

</LI> </UL> <H4>Notes</H4>                

   <P> In applying counterexamples to expose an invalid categorical syllogism it is useful to keep in mind the following set of terms: cats, dogs, mammals, fish and animals.  

 </P>  
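<P> The invalid form shown above can also be probed mechanically.  The following Python sketch (with made-up set members; the names are purely illustrative) models each category as a set, so that "All A are B" becomes a subset test, and exhibits a case where both premises hold while the conclusion fails: </P>

```python
# Counterexample check for the form: All A are B; All C are B; therefore All A are C.
# Categories are modeled as sets; "All A are B" is the subset test A <= B.
cats = {"felix", "whiskers"}
dogs = {"rex", "fido"}
animals = cats | dogs | {"nemo"}   # everything, plus one fish

def all_are(a, b):
    """True if every member of a is a member of b ('All A are B')."""
    return a <= b

premise1 = all_are(cats, animals)   # All cats are animals
premise2 = all_are(dogs, animals)   # All dogs are animals
conclusion = all_are(cats, dogs)    # All cats are dogs

print(premise1, premise2, conclusion)  # True True False: the form is invalid
```

<P> Because both premises come out true while the conclusion comes out false, any argument of this form is refuted. </P>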
<HR><HR><A NAME="54"></A><H2>54.  Fallacies </H2><UL>[ <I> Argument :: Refutation :: Fallacies </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A fallacy is a flaw in an erroneous or deceptive argument.

</P>  <H4>Description</H4>                

   <P> Over time, logicians have catalogued many kinds of fallacies.  This section presents a catalog organized roughly according to the four claims.  Each fallacy is presented in the following format.

<PRE>
	<B>Alternate Names</B>
	<I>Other names by which this fallacy is known.</I>

	<B>Description</B>
	<I>A description of the fallacy.</I>

	<B>Examples</B>
	<I>Simple examples of the fallacy in use.</I>

	<B>Exposition</B>
	<I>Hints for exposing the fallacy.</I>
</PRE>  

 </P>  
<HR><HR><A NAME="55"></A><H2>55.  Fallacies of Arguments </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments </I> ]</UL>
<H4>Description</H4>                

   <P> This section is a catalog of fallacies, organized according to which claim they fail to meet.  Each fallacy in the catalog begins with a list of the alternate names by which it is known (if any), then a short description, followed by some examples of the fallacy, and finally a list of techniques for exposing it.  If the exposition section is not present, the fallacy can probably be exposed with a counter-example; this is particularly true for most fallacies relating to form.
  

 </P>  
<HR><HR><A NAME="56"></A><H2>56.  Factual Claim </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim </I> ]</UL>
<H4>Description</H4>                

   <P> Fallacies which call into question the truth of one or more premises.  

 </P>  
<HR><HR><A NAME="57"></A><H2>57.  Factual Problems </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Factual Problems </I> ]</UL>
<H4>Description</H4>                

   <P> Premises which give rise to questions about how well they fulfill the factual claim.  This includes premises which are only partially factual and those that raise concerns in the way they present the facts.

   </P> <P> For an argument to be convincing, the audience must accept the premises as true.  These fallacies call the factual nature of one or more premises into question.  

 </P>  
<HR><HR><A NAME="58"></A><H2>58.  False Fact </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Factual Problems :: False Fact </I> ]</UL>
<H4>Description</H4>                

   <P> A premise is not factual.  

 </P>  
<HR><HR><A NAME="59"></A><H2>59.  Fake Precision </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Factual Problems :: Fake Precision </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> False Precision

   </LI> <LI> Misplaced Precision

   </LI> <LI> Spurious Accuracy

</LI> </UL> <H4>Description</H4>                

   <P> A common form of false premise.  Occurs when an argument treats information as more precise than it really is.  This happens when imprecise information contained in the premises must be taken as precise in order to adequately support the conclusion.  

 </P>  
<HR><HR><A NAME="60"></A><H2>60.  Unestablished </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Factual Problems :: Unestablished </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Untestable

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The theory advanced to explain why some phenomenon occurs has not been established or cannot be tested.

</P>  <H4>Examples</H4><UL>            

   <LI> Aircraft in the mid-Atlantic disappear because of the effect of the Bermuda Triangle, a force so subtle it cannot be measured on any instrument.

   </LI> <LI> I won the lottery because my psychic aura made me win.

   </LI> <LI> The reason why everything exists is that god created it.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the theory.  Show that it makes no predictions, or that the predictions it does make cannot ever be wrong, even if the theory is false.  

 </P>  
<HR><HR><A NAME="61"></A><H2>61.  Listener Doubt/Disagreement </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Factual Problems :: Listener Doubt/Disagreement </I> ]</UL>
<H4>Description</H4>                

   <P> Not really a fallacy, but an argument is pretty much meaningless if your listener doesn't believe your premises.  

 </P>  
<HR><HR><A NAME="62"></A><H2>62.  Hearsay </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Factual Problems :: Listener Doubt/Disagreement :: Hearsay </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The arguer uses as a premise information obtained from someone else.  This fallacy typically occurs in an argument where someone is reasoning from events that have happened.  The problem is that the information is second-hand: we don't know whether the arguer got the information exactly right, and we are unable to attack the premise effectively because the arguer cannot fairly defend it.  

 </P>  
<HR><HR><A NAME="63"></A><H2>63.  Formal Problems </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems </I> ]</UL>
<H4>Description</H4>                

   <P> These fallacies involve either premises of multiple propositions or a single compound proposition.  In either case, no particular fact is wrong; however, the way the propositions are assembled (usually through the use of the logical operations 'and', 'or' and 'if...then') leads to a fallacious statement or implication.  

 </P>  
<HR><HR><A NAME="64"></A><H2>64.  Inconsistency </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems :: Problems with 'AND' :: Inconsistency </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Contradictory Premises

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Illegitimate use of the 'and' operator.  The arguer asserts a premise that cannot be true because it contradicts itself, or a set of premises that cannot all be true because they contradict each other.  In such a case, the propositions may be contradictories or they may be contraries.

</P>  <H4>Examples</H4><UL>            

   <LI> Montreal is about 200km from Ottawa, while Toronto is 400km from Ottawa.  Toronto is closer to Ottawa than Montreal.

   </LI> <LI> John is taller than Jake, and Jake is taller than Fred, while Fred is taller than John.

</LI> </UL> <H4>Exposition</H4>                

   <P> Assume that one of the statements is true, and then use it as a premise to show that one of the other statements is false.  

 </P>  
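<P> The exposition above can be mechanized for the second example.  The following Python sketch (a hypothetical illustration) brute-forces every possible height ordering of John, Jake and Fred and finds none that satisfies all three claims at once, showing that the premises are jointly contradictory: </P>

```python
# Consistency check for: John is taller than Jake, Jake is taller than Fred,
# and Fred is taller than John.  Try every strict height ordering.
from itertools import permutations

people = ["John", "Jake", "Fred"]

def consistent():
    """True if some height ordering satisfies all three 'taller than' claims."""
    for order in permutations(people):          # order[0] is the tallest
        rank = {name: i for i, name in enumerate(order)}
        taller = lambda a, b: rank[a] < rank[b]
        if taller("John", "Jake") and taller("Jake", "Fred") and taller("Fred", "John"):
            return True
    return False

print(consistent())  # False: no ordering satisfies all three claims
```

<P> Since no assignment of heights makes all three premises true, at least one of them must be false, which is exactly what the exposition technique asks us to show. </P>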
<HR><HR><A NAME="65"></A><H2>65.  Plurium Interrogationum </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems :: Problems with 'AND' :: Plurium Interrogationum </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Many Questions

   </LI> <LI> Complex Question

   </LI> <LI> Fallacy of interrogation

   </LI> <LI> Fallacy of presupposition

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An illegitimate use of the 'and' operator.  Two otherwise unrelated points are conjoined and treated as a single proposition.  The reader is expected to accept or reject both together, when in reality one is acceptable while the other is not.

   </P> <P> Occurs when someone demands a simple (or simplistic) answer to a complex question.

</P>  <H4>Examples</H4><UL>            

   <LI> Do you support freedom and the right to bear arms?

   </LI> <LI> Have you stopped using illegal sales practices?

   </LI> <LI> You should support home education and the God-given right of parents to raise their children according to their own beliefs.

   </LI> <LI> Are higher taxes an impediment to business or not?  Yes or no?

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the two propositions illegitimately conjoined and show that believing one does not mean that you have to believe the other.  

 </P>  
<HR><HR><A NAME="66"></A><H2>66.  Loaded Question </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems :: Problems with 'AND' :: Loaded Question </I> ]</UL>
<H4>Description</H4>                

   <P> When someone asks a question "loaded" with a false presupposition.

</P>  <H4>Examples</H4><UL>            

   <LI> Have you stopped beating your wife?  

 </LI> </UL> 
<HR><HR><A NAME="67"></A><H2>67.  False Dilemma </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems :: Problems with 'OR' :: False Dilemma </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> False Dichotomy

   </LI> <LI> Bifurcation


   </LI> <LI> Black-and-White Fallacy

   </LI> <LI> Bogus Dilemma

   </LI> <LI> Either/Or Fallacy

</LI> </UL> <H4>Description</H4>                

   <P> Illegitimate use of the 'or' operator.  Arguer offers a limited number of options (usually two), while in reality there are more options.

   </P> <P> The false assumption is made that only one of a number of alternatives holds. (Varzi)

</P>  <H4>Examples</H4><UL>            

   <LI> Either you're for me or against me.

   </LI> <LI> America: love it or leave it.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the options given and show (with an example) that there is an additional option.  

 </P>  
<HR><HR><A NAME="68"></A><H2>68.  Slippery Slope </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems :: Problems with 'IF...THEN...' :: Slippery Slope </I> ]</UL>
<H4>Description</H4><UL>            

   <LI> An illegitimate use of the 'if-then' operator.  In order to show that a proposition P is unacceptable, a sequence of increasingly unacceptable events is shown to follow from P.

   </LI> <LI> Occurs when the conclusion of an argument rests upon an alleged chain reaction, suggesting that a single step in the wrong direction will result in a disastrous or otherwise undesirable outcome.  Its form is:
<PRE>
	A1 &#8594; A2
	A2 &#8594; A3
	...
	An &#8594; An+1
	It should not be the case that An+1.
<B>&#8756;</B>	It should not be the case that A1.
</PRE>

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> If I make an exception for you then I have to make an exception for everyone.

   </LI> <LI> You should never gamble.  Once you start gambling you find it hard to stop.  Soon you are spending all your money on gambling, and eventually you will turn to crime to support your habit.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the proposition P being refuted and identify the final event in the series of events.  Then show that this final event need not occur as a consequence of P.  

 </P>  
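<P> It is worth noting that the chain itself is just repeated modus tollens, which is formally valid; the weakness of a slippery slope lies in the dubious conditional premises, not in the form.  A small Python sketch (illustrative only) confirms by brute force that a three-step chain is valid, so refutation must target the links themselves: </P>

```python
# Brute-force truth-table check: (A1 -> A2), (A2 -> A3), not A3  entails  not A1.
# The chain form is valid; a slippery slope fails because its conditionals are dubious.
from itertools import product

def implies(a, b):
    return (not a) or b

valid = all(
    not a1                                   # the conclusion: not A1
    for a1, a2, a3 in product([True, False], repeat=3)
    if implies(a1, a2) and implies(a2, a3) and not a3   # all premises true
)
print(valid)  # True: in every row where the premises hold, the conclusion holds
```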
<HR><HR><A NAME="69"></A><H2>69.  Constructive & Destructive Dilemmas </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Formal Problems :: Constructive & Destructive Dilemmas </I> ]</UL>
<H4>Description</H4>                

   <P> Many dilemma arguments take one of two forms, the <I>constructive</I> or <I>destructive</I> dilemma.

</P>  <H4>Constructive Dilemma</H4>                

   <P><PRE>
	(P &#8594; Y) &#8743; (Q &#8594; Z)<BR>
	P &#8744; Q
<B>&#8756;</B>	Y &#8744; Z
</PRE>

</P>  <H4>Destructive Dilemma</H4>                

   <P> <PRE>
	(P &#8594; Y) &#8743; (Q &#8594; Z)<BR>
	&#172;Y &#8744; &#172;Z
<B>&#8756;</B>	&#172;P &#8744; &#172;Q
</PRE>

</P>  <H4>Exposition</H4>                

   <P> There are two ways to attack these arguments:<BR>

   </P> <P> <B>Grasping by the horns</B><BR>
   Expose the fallacy by attacking the conjunction of the first premise.  Proceed by proving the conjunctive premise false by proving either conjunct false.

   </P> <P> <B>Escaping between the horns</B><BR>
   Expose the fallacy by attacking the disjunction of the second premise.  Proceed by proving the disjunctive premise false, typically by showing that a third alternative exists.
  

 </P>  
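<P> Both dilemma forms above can be verified by a brute-force truth table.  The following Python sketch (a minimal illustrative check) enumerates all assignments to P, Q, Y and Z and confirms that whenever the premises are true, the conclusion is true as well; this is why refutation must attack the premises rather than the form: </P>

```python
# Truth-table validity check for the constructive and destructive dilemma forms.
from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion, n_vars=4):
    """True if the conclusion holds in every row where all premises hold."""
    return all(
        conclusion(*row)
        for row in product([True, False], repeat=n_vars)
        if all(p(*row) for p in premises)
    )

# Constructive dilemma: (P -> Y) & (Q -> Z), P v Q, therefore Y v Z
constructive = valid(
    premises=[lambda p, q, y, z: implies(p, y) and implies(q, z),
              lambda p, q, y, z: p or q],
    conclusion=lambda p, q, y, z: y or z)

# Destructive dilemma: (P -> Y) & (Q -> Z), ~Y v ~Z, therefore ~P v ~Q
destructive = valid(
    premises=[lambda p, q, y, z: implies(p, y) and implies(q, z),
              lambda p, q, y, z: (not y) or (not z)],
    conclusion=lambda p, q, y, z: (not p) or (not q))

print(constructive, destructive)  # True True: both forms are valid
```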
<HR><HR><A NAME="70"></A><H2>70.  Premises of Definition </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Premises of Definition </I> ]</UL>
<H4>Description</H4>                

   <P> In order to make our words or concepts clear, we use definitions.  The purpose of a definition is to state exactly what a word means.  A good definition should enable a reader to 'pick out' instances of the word or concept with no outside help.  

 </P>  
<HR><HR><A NAME="71"></A><H2>71.  Too Broad </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Premises of Definition :: Too Broad </I> ]</UL>
<H4>Description</H4>                

   <P> The definition includes items which should not be included.

</P>  <H4>Examples</H4><UL>            

   <LI> An apple is something which is red and round.  (So is Mars)

   </LI> <LI> A figure is square if and only if it has four sides of equal length.  (So does a rhombus)

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the term being defined.  Identify the conditions in the definition.  Find an item which meets the condition but is obviously not an instance of the term.  

 </P>  
<HR><HR><A NAME="72"></A><H2>72.  Too Narrow </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Premises of Definition :: Too Narrow </I> ]</UL>
<H4>Description</H4>                

   <P> The definition does not include items which should be included.

</P>  <H4>Examples</H4><UL>            

   <LI> An apple is something which is red and round.  (But Golden Delicious apples are yellow)

   </LI> <LI> A book is pornographic if and only if it contains pictures of naked people.  (What about pornographic novels without pictures?)

   </LI> <LI> Something is music if and only if it is played on a piano.  (So a violin does not play music?)

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the term being defined.  Identify the conditions in the definition.  Find an item which is an instance of the term but does not meet the conditions.  

 </P>  
<HR><HR><A NAME="73"></A><H2>73.  Failure to Elucidate </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Premises of Definition :: Failure to Elucidate </I> ]</UL>
<H4>Description</H4>                

   <P> The definition is harder to understand than the term being defined.

</P>  <H4>Examples</H4><UL>            

   <LI> Someone is lascivious if and only if he is wanton.

   </LI> <LI> An object is beautiful if and only if it is aesthetically successful.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the term being defined.  Identify the conditions in the definition.  Show that the conditions are harder to understand than the term being defined.  

 </P>  
<HR><HR><A NAME="74"></A><H2>74.  Circular Definition </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Premises of Definition :: Circular Definition </I> ]</UL>
<H4>Description</H4>                

   <P> The definition includes the term being defined as a part of the definition.

</P>  <H4>Examples</H4><UL>            

   <LI> An animal is human if and only if it has human parents.

   </LI> <LI> A book is pornographic if and only if it contains pornography.

</LI> </UL> <H4>Exposition</H4>                

    <P> Identify the term being defined.  Identify the conditions in the definition.  Show that at least one term used in the conditions is the same as the term being defined.  

 </P>  
<HR><HR><A NAME="75"></A><H2>75.  Conflicting Conditions </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Factual Claim :: Premises of Definition :: Conflicting Conditions </I> ]</UL>
<H4>Description</H4>                

   <P> The definition is self-contradictory.

</P>  <H4>Examples</H4>                

   <P> A society is free if and only if liberty is maximized and people are required to take responsibility for their actions.

   </P> <P> People are eligible to apply for a learner's permit (to drive) if they have (a) no previous driving experience, (b) access to a vehicle, and (c) experience operating a motor vehicle.  

 </P>  
<HR><HR><A NAME="76"></A><H2>76.  Relevance Claim </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim </I> ]</UL>

 <H4>Description</H4><UL>            

   <LI> Occur when there is a problem with the relevance of the premises to the conclusion.  Most such arguments may be called 'non sequiturs' (Latin: 'it does not follow').

   </LI> <LI> Often used intentionally to divert attention from the subject.  

 </LI> </UL> 
<HR><HR><A NAME="77"></A><H2>77.  Petitio Principii </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Petitio Principii </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Begging the Question

   </LI> <LI> Request for the source

</LI> </UL> <H4>Description</H4>                

   <P> The arguer creates the illusion that inadequate premises provide adequate support for the conclusion by leaving out a key premise, by restating the conclusion as a premise, or by reasoning in a circle.  After reading or hearing the argument, the observer is inclined to ask, "But how do you know X?" where X is the needed support.  

 </P>  
<HR><HR><A NAME="78"></A><H2>78.  Assume the conclusion </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Petitio Principii :: Assume the conclusion </I> ]</UL>
<H4>Description</H4>                

   <P> The premise of an argument merely states the conclusion in slightly different language.

</P>  <H4>Examples</H4><UL>            

   <LI> Capital punishment is justified for the crimes of murder and kidnapping because it is quite legitimate and appropriate that someone be put to death for having committed such hateful and inhuman acts.

   </LI> <LI> Anyone who preaches revolution has a vision of the future for the simple reason that if a person has no vision of the future he could not possibly preach revolution.  

 </LI> </UL> 
<HR><HR><A NAME="79"></A><H2>79.  Circulus in Probando </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Petitio Principii :: Circulus in Probando </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Circular Argument

   </LI> <LI> Vicious Circle

</LI> </UL> <H4>Description</H4>                

   <P> The conclusion restates the premise after a chain of inferences.

</P>  <H4>Examples</H4><UL>            

   <LI> Ford Motor Company clearly produces the finest car in the United States.  We know they produce the finest cars because they have the best design engineers.  This is true because they can afford to pay them more than other manufacturers.  Obviously they can afford to pay them more because they produce the finest cars in the United States.

   </LI> <LI> Either you're for us or against us.  You're obviously not for us.  So, you're against us.

   </LI> <LI> Since I'm not lying, it follows that I'm telling the truth.

   </LI> <LI> We know that God exists, since the Bible says God exists.  What the Bible says must be true, since God wrote it and God never lies.  (Here, we must agree that God exists in order to believe that God wrote the Bible.)

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> Either it's hot or it's cold.  It's not cold.  So, it's hot.

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that in order to believe that the premises are true we must already agree that the conclusion is true.  

 </P>  
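<P> The counter-example above works because the underlying form, disjunctive syllogism, is valid on its own; the circularity in the 'for us or against us' example lies in how the disjunctive premise is established, not in the form.  A brute-force truth table in Python (a minimal sketch) confirms the form's validity: </P>

```python
# Truth-table check: disjunctive syllogism (P or Q, not Q, therefore P) is valid.
from itertools import product

valid = all(
    p                                   # the conclusion: P
    for p, q in product([True, False], repeat=2)
    if (p or q) and not q               # both premises true
)
print(valid)  # True: the only satisfying row (P=True, Q=False) makes P true
```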
<HR><HR><A NAME="80"></A><H2>80.  Appeals to Irrelevant Attributes of the Proponent </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeals to Irrelevant Attributes of the Proponent </I> ]</UL>
<H4>Description</H4>                

   <P> A fallacy of irrelevance involving the origins or history of an idea.  It is fallacious to endorse or condemn an idea based on its past, rather than its present, merits or demerits, unless its past in some way affects its present value.  For example, the origin of evidence can be quite relevant to its evaluation, especially in historical investigations.  The origin of testimony (whether first-hand, hearsay, or rumor) carries weight in evaluating it.

   </P> <P> In contrast, the value of a scientific idea can be objectively evaluated by established techniques, so the origin or history of the idea is irrelevant to its value.  For example, the chemist Kekule claimed to have discovered the ring structure of the benzene molecule during a dream of a snake biting its own tail.  While this fact is psychologically interesting, it is neither evidence for nor against the hypothesis that benzene has a ring structure, which had to be tested for correctness.  

 </P>  
<HR><HR><A NAME="81"></A><H2>81.  Ad Hominem </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeals to Irrelevant Attributes of the Proponent :: Ad Hominem </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Appeal against the Person

</LI> </UL> <H4>Description</H4><UL>            

   <LI> Attempt to discredit a claim or proposal by attacking its proponents (diverting attention from the argument).  

 </LI> </UL> 
<HR><HR><A NAME="82"></A><H2>82.  Abusive </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeals to Irrelevant Attributes of the Proponent :: Ad Hominem :: Abusive </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum Ad hominem abusive

   </LI> <LI> Style Over Substance

</LI> </UL> <H4>Description</H4>                

   <P> Attack a person's age, character, family, gender, ethnicity, social or economic status, personality, appearance, dress, behavior or professional, political or religious affiliations.

</P>  <H4>Examples</H4><UL>            

   <LI> Nixon lost the presidential debate because of the sweat on his forehead.

   </LI> <LI> Trudeau knows how to move a crowd.  He must be right.

   </LI> <LI> Why don't you take the advice of that nicely dressed young man?

   </LI> <LI> Jones advocates fluoridation<BR>
   Jones is a convicted thief.<BR>
   <B>&#8756;</B> We should not fluoridate.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the attack and show that the character or circumstances of the person has nothing to do with the truth or falsity of the proposition being defended.  

 </P>  
<HR><HR><A NAME="83"></A><H2>83.  Circumstantial </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeals to Irrelevant Attributes of the Proponent :: Ad Hominem :: Circumstantial </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum Ad hominem circumstantial

</LI> </UL> <H4>Description</H4>                

   <P> An attempt to refute a claim (or divert attention from the real issue) by attacking not its proponent, but the proponent's circumstances, including beliefs, affiliations, associations or assumptions.

   </P> <P> The ad hominem circumstantial is easy to recognize because it always takes this form:
<PRE>
   Of course Mr. X argues this way; just look at the circumstances that affect him.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> We should disregard Share B.C.'s argument because they are being funded by the logging industry.

   </LI> <LI> Jones advocates fluoridation.<BR>
   Jones hangs around criminals.<BR>
   <B>&#8756;</B> We should not fluoridate.<BR>

   </LI> <LI> Jones supports fluoridation.<BR>
   Jones owns a fluoridation firm.<BR>
   <B>&#8756;</B> We should not fluoridate.<BR>

   </LI> <LI> Jones denies superstition.<BR>
   He says breaking a mirror is bad.<BR>
   <B>&#8756;</B> Breaking a mirror is probably bad.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the attack and show that the character or circumstances of the person has nothing to do with the truth or falsity of the proposition being defended.

</P>  <H4>Notes</H4><UL>            

   <LI> Sometimes this is called 'Guilt by Association Fallacy', but only when it refers to the associations held by the arguer.  

 </LI> </UL> 
<HR><HR><A NAME="84"></A><H2>84.  Guilt by Association </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeals to Irrelevant Attributes of the Proponent :: Ad Hominem :: Circumstantial :: Guilt by Association </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Bad Company Fallacy

   </LI> <LI> The Company that You Keep Fallacy

</LI> </UL> <H4>Description</H4>                

   <P>This fallacy has the following form:

<PRE>
	Person P accepts idea I.
	<B>&#8756;</B> I must be wrong.  (Since nobody wants to be associated with P.)
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Hitler was in favor of euthanasia.  Therefore, euthanasia is wrong.

   </LI> <LI> The Nazis favored eugenics.  So, eugenics is wrong.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> Hitler was a vegetarian.  So, vegetarianism is wrong.

   </LI> <LI> The Nazis were conservationists.  So, conservationism is wrong.  

 </LI> </UL> 
<HR><HR><A NAME="85"></A><H2>85.  Tu Quoque </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeals to Irrelevant Attributes of the Proponent :: Ad Hominem :: Tu Quoque </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> You too

</LI> </UL> <H4>Description</H4>                

   <P> Refute a claim by attacking its proponent on the grounds that he or she is a hypocrite (does not practice what he preaches), upholds a double standard of conduct, or is selective and therefore inconsistent in enforcing a principle.

</P>  <H4>Examples</H4><UL>            

   <LI> You say I shouldn't drink, but you haven't been sober for more than a year.

   </LI> <LI> Jones believes we shouldn't drink.<BR>
   Jones is an alcoholic.<BR>
   <B>&#8756;</B> We shouldn't abstain from liquor.<BR>
   (Varzi)

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the attack and show that the character or circumstances of the person has nothing to do with the truth or falsity of the proposition being defended.  

 </P>  
<HR><HR><A NAME="86"></A><H2>86.  Situational Fallacies </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Situational Fallacies </I> ]</UL>
<H4>Description</H4>                

   <P> Arguments in support of or against someone by bringing up their situation.  

 </P>  
<HR><HR><A NAME="87"></A><H2>87.  Ad Misericordiam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Situational Fallacies :: Ad Misericordiam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad misericordiam

   </LI> <LI> Appeal to Pity

</LI> </UL> <H4>Description</H4>                

   <P> Such arguments ask us to excuse or forgive an action on the grounds of extenuating circumstances.  They seek clemency for breaches of duty, or sympathy for someone whose poor conduct or noncompliance with a rule is already established.  An appeal to pity may be either legitimate or fallacious, depending on whether or not the allegedly extenuating circumstances are genuinely relevant to the case.

</P>  <H4>Examples</H4><UL>            

   <LI> We hope you'll accept our recommendations.  We spent the last three months working overtime on them.

   </LI> <LI> Oh, officer, you see my baby here was crying for some candy and I took her to the candy store before I came back to my car.<BR>
   <B>&#8756;</B> Therefore, you shouldn't give me a parking ticket.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the proposition and the appeal to pity and argue that the pitiful state of the arguer has nothing to do with the truth of the proposition.  

 </P>  
<HR><HR><A NAME="88"></A><H2>88.  Ad Novitatem </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Situational Fallacies :: Age :: Ad Novitatem </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad novitatem
   </LI> <LI> Appeal to the new(er)

</LI> </UL> <H4>Description</H4>                

   <P> The fallacy of asserting that something is better or more correct simply because it is new, or newer than something else.

</P>  <H4>Example</H4><UL>            

   <LI> BeOS is a far better choice of operating system than OpenStep, as it has a much newer design.  

 </LI> </UL> 
<HR><HR><A NAME="89"></A><H2>89.  Ad Antiquitatem </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Situational Fallacies :: Age :: Ad Antiquitatem </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad antiquitatem
   </LI> <LI> Appeal to antiquity
   </LI> <LI> Appeal to tradition

</LI> </UL> <H4>Description</H4>                

   <P> Something is said to be right or good because it is old.

</P>  <H4>Examples</H4><UL>            

   <LI> "For thousands of years Christians have believed in Jesus.  Christianity must be true, to have persisted so long even in the face of persecution."  

 </LI> </UL> 
<HR><HR><A NAME="90"></A><H2>90.  Ad Crumenam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Situational Fallacies :: Social Class :: Ad Crumenam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad crumenam

   </LI> <LI> Appeal to the wealthy

</LI> </UL> <H4>Description</H4>                

   <P> The belief that those with more money are more likely to be right.

</P>  <H4>Example</H4><UL>            

   <LI> Microsoft software is undoubtedly superior; why else would Bill Gates have got so rich?  

 </LI> </UL> 
<HR><HR><A NAME="91"></A><H2>91.  Ad Lazarum </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Situational Fallacies :: Social Class :: Ad Lazarum </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad lazarum
   </LI> <LI> Appeal to the poor

</LI> </UL> <H4>Description</H4>                

   <P> The claim that someone poorer is sounder or more virtuous than someone wealthier.

</P>  <H4>Example</H4><UL>            

   <LI> Monks are more likely to possess insight into the meaning of life, as they have given up the distractions of wealth.  

 </LI> </UL> 
<HR><HR><A NAME="92"></A><H2>92.  Ad Verecundiam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Verecundiam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad verecundiam

   </LI> <LI> Appeal to (unqualified) Authority

   </LI> <LI> Argument from Authority

   </LI> <LI> Ipse Dixit

</LI> </UL> <H4>Description</H4>                

   <P> While sometimes it may be appropriate to cite an authority to support a point, often it is not.  In particular, an appeal to authority is inappropriate if:

</P>  <UL>            

   <LI> The person is not qualified to have an expert opinion on the subject.

   </LI> <LI> Experts in the field disagree on this issue.

   </LI> <LI> The authority was making a joke, drunk, or otherwise not being serious.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> Noted psychologist Dr. Frasier Crane recommends that you buy the EZ-Rest Hot Tub.

   </LI> <LI> Economist John Kenneth Galbraith argues that a tight money policy is the best cure for a recession.

   </LI> <LI> My teacher says that I should be proud to be an American.<BR>
   <B>&#8756;</B> Therefore, I should be proud to be an American.<BR>
   (Varzi)

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that either (1) the person cited is not an authority in the field, or (2) there is general disagreement among the experts in the field on this point.  

 </P>  
<HR><HR><A NAME="93"></A><H2>93.  ... Anonymous </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Verecundiam :: ... Anonymous </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad verecundiam, anonymous

</LI> </UL> <H4>Description</H4>                

   <P> The authority in question is not named.

</P>  <H4>Examples</H4><UL>            

   <LI> A government official said today that the new gun law will be proposed tomorrow.

   </LI> <LI> Experts agree that the best way to prevent nuclear war is to prepare for it.

   </LI> <LI> It is held that there are more than two million needless operations conducted every year.

   </LI> <LI> Rumor has it that the Prime Minister will declare another holiday in October.

</LI> </UL> <H4>Exposition</H4>                

   <P> Argue that because we don't know the source of the information, we have no way to evaluate its reliability.  

 </P>  
<HR><HR><A NAME="94"></A><H2>94.  Ad Populum </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Verecundiam :: Ad Populum </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum Ad Populum

   </LI> <LI> Appeal to the People/Popularity

</LI> </UL> <H4>Description</H4>                

   <P> A proposition is held to be true because it is widely held to be true or is held to be true by some (usually upper-crust) sector of the population.  This is an appeal to emotion because it creates in the listener's mind the fear of being left behind.  The basic structure of the ad populum is:

<PRE>
   You want to be accepted/included in the group/loved/esteemed... Therefore, you should accept XYZ as true.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Everyone knows that the earth is flat, so why do you persist in your outlandish claims?

   </LI> <LI> If you were beautiful, you could live like this, so buy Buty-EZ and become beautiful.

   </LI> <LI> Everyone believes premarital sex is wrong.<BR>
   <B>&#8756;</B> Premarital sex is wrong.

   </LI> <LI> Discriminating palates prefer wine brand X.<BR>
   <B>&#8756;</B> You should drink wine brand X.  

 </LI> </UL> 
<HR><HR><A NAME="95"></A><H2>95.  direct </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Verecundiam :: Ad Populum :: direct </I> ]</UL>
<H4>Description</H4>                

   <P> In the direct form of ad populum, the arguer attempts to rouse the crowd, creating a 'sheep or mob mentality' for purposes of convincing.  Since people don't want to be 'left out', they will often accept the conclusions of the speaker without real evidence.

</P>  <H4>Examples</H4><UL>            

   <LI> Adolf Hitler was a master at this.

   </LI> <LI> Peer pressure.  

 </LI> </UL> 
<HR><HR><A NAME="96"></A><H2>96.  indirect </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Verecundiam :: Ad Populum :: indirect </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Bandwagon argument

</LI> </UL> <H4>Description</H4>                

   <P> The arguer claims that everybody's doing it.

</P>  <H4>Examples</H4><UL>            

   <LI> Of course you want to buy Zing toothpaste.  Why, 90 percent of America brushes with Zing.  

 </LI> </UL> 
<HR><HR><A NAME="97"></A><H2>97.  Hearsay </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Verecundiam :: Hearsay </I> ]</UL>
<H4>Description</H4>                

   <P> A variation is hearsay, which depends upon second- or third-hand sources.  

 </P>  
<HR><HR><A NAME="98"></A><H2>98.  Ad Baculum </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Ad Baculum </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad baculum

   </LI> <LI> Appeal to the stick

   </LI> <LI> Appeal to Force

</LI> </UL> <H4>Description</H4>                

   <P> Threats or intimidation.  The reader is told that unpleasant consequences will follow if they do not agree with the author.

</P>  <H4>Examples</H4><UL>            

   <LI> You had better agree that the new company policy is the best bet if you expect to keep your job.

   </LI> <LI> If you don't vote for me, I'll break your leg.<BR>
   <B>&#8756;</B> You ought to vote for me.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the threat and the proposition and argue that the threat is unrelated to the truth or falsity of the proposition.  

 </P>  
<HR><HR><A NAME="99"></A><H2>99.  Ad Invidiam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Ad Invidiam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad invidiam

   </LI> <LI> Appeal to envy  

 </LI> </UL> 
<HR><HR><A NAME="100"></A><H2>100.  Ad Metum </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Ad Metum </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad Metum

   </LI> <LI> Appeal to Fear  

 </LI> </UL> 
<HR><HR><A NAME="101"></A><H2>101.  Ad Odium </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Ad Odium </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad Odium

   </LI> <LI> Appeal to Hatred
  

 </LI> </UL> 
<HR><HR><A NAME="102"></A><H2>102.  Ad Superbium </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Ad Superbium </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad Superbium

   </LI> <LI> Appeal to Pride  

 </LI> </UL> 
<HR><HR><A NAME="103"></A><H2>103.  Ad Consequentiam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Ad Consequentiam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad consequentiam

   </LI> <LI> Appeal to the consequence

</LI> </UL> <H4>Description</H4>                

   <P> The author points out the disagreeable consequences of holding a particular belief in order to show that the belief is false.

</P>  <H4>Examples</H4><UL>            

   <LI> You can't agree that evolution is true, because if it were, then we would be no better than monkeys and apes.

   </LI> <LI> You must believe in God, for otherwise life would have no meaning.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the consequences and argue that what we want to be the case does not affect what is in fact the case.  

 </P>  
<HR><HR><A NAME="104"></A><H2>104.  Appeal to Vanity/Snobbery </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Appeal to Vanity/Snobbery </I> ]</UL>
<H4>Description</H4>                

   <P> Associates the product with someone who is admired, pursued, or imitated, the idea being that you, too, will be admired and pursued if you use it.

</P>  <H4>Examples</H4><UL>            

   <LI> The Few, the Proud, the Marines.  

 </LI> </UL> 
<HR><HR><A NAME="105"></A><H2>105.  Divine Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Divine Fallacy </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argument from incredulity.

</LI> </UL> <H4>Description</H4>                

   <P> Arguing that because you can't explain something, or because it is unexplainable given current knowledge or technology, it must be an act of God, divinity, a paranormal phenomenon, or a "UFO".  

 </P>  
<HR><HR><A NAME="106"></A><H2>106.  Pragmatic Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Emotion :: Pragmatic Fallacy </I> ]</UL>
<H4>Description</H4>                

   <P> Committed when one argues that something is true because it works.  (E.g., astrology, numerology, etc.)  

 </P>  
<HR><HR><A NAME="107"></A><H2>107.  Ad Ignorantiam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Ignorantiam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum Ad Ignorantiam

   </LI> <LI> Argument from Ignorance

   </LI> <LI> Appeal to Ignorance

</LI> </UL> <H4>Description</H4>                

   <P> Has one of the following two forms:<BR>
<PRE>
	It has not been proved that P.
<B>&#8756;</B>	It is not the case that P.
</PRE>
<B> or </B>
<PRE>
	It has not been proved that not-P.
<B>&#8756;</B>	P.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Since you cannot prove that ghosts do not exist, they must exist.

   </LI> <LI> No one has ever proved that God exists.<BR>
   <B>&#8756;</B> God does not exist.

   </LI> <LI> No one has ever proved that God does not exist.<BR>
   <B>&#8756;</B> God exists.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the proposition in question.  Argue that it may be true even though we don't know whether it is or isn't.  

 </P>  
<HR><HR><A NAME="108"></A><H2>108.  Burden of Proof </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Ignorantiam :: Burden of Proof </I> ]</UL>
<H4>Description</H4>                

   <P> The burden of proof is always on the person asserting something.  Shifting the burden of proof is the fallacy of putting the burden of proof on the person who denies or questions the assertion.  The source of the fallacy is the assumption that something is true unless proven otherwise.  (a special case of ad ignorantiam)

</P>  <H4>Examples</H4><UL>            

   <LI> Ok, so if you don't think the grey aliens have gained control of the US government, can you prove it?

</LI> </UL>   
<HR><HR><A NAME="109"></A><H2>109.  Prejudicial Language </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Prejudicial Language </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Prejudicial Language

</LI> </UL> <H4>Description</H4>                

   <P> Loaded or emotive terms are used to attach value or moral goodness to believing the proposition.

</P>  <H4>Examples</H4><UL>            

   <LI> Right thinking Canadians will agree with me that we should have another free vote on capital punishment.

   </LI> <LI> A reasonable person would agree that our income statement is too low.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the prejudicial terms used (e.g., "right thinking Canadians").  Show that disagreeing with the conclusion does not make a person "wrong thinking".

</P>  <H4>Notes:</H4><UL>            

   <LI> Often the word "proof" is added to an inductive argument to make it appear soundly deductive.  

 </LI> </UL> 
<HR><HR><A NAME="110"></A><H2>110.  Ad Nauseam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Ad Nauseam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum ad nauseam

</LI> </UL> <H4>Description</H4>                

   <P> Belief that an assertion is more likely to be true, or more likely to be accepted as true, the more often it is heard.  

 </P>  
<HR><HR><A NAME="111"></A><H2>111.  Wishful Thinking </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Wishful Thinking </I> ]</UL>
<H4>Description</H4>                

<P> Takes the form:

<PRE>
	I want P to be true.
<B>&#8756;</B>	P is true.
</PRE>  

 </P>  
<HR><HR><A NAME="112"></A><H2>112.  Appeal to Nature </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Appeal to Nature </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Natural Law Fallacy

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> Of course homosexuality is unnatural.  When's the last time you saw two animals of the same sex mating?  

 </LI> </UL> 
<HR><HR><A NAME="113"></A><H2>113.  Straw Man </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Straw Man </I> ]</UL>
<H4>Description</H4>                

   <P> An attempt to refute a claim by confusing it with a less plausible claim and then attacking that less plausible claim instead of addressing the original issue.

</P>  <H4>Examples</H4><UL>            

   <LI> People who opposed the Charlottetown Accord probably just wanted Quebec to separate.  But we want Quebec to stay in Canada.

   </LI> <LI> We should have conscription.  People don't want to enter the military because they find it an inconvenience.  But they should realize that there are more important things than convenience.

   </LI> <LI> There can be no truth if everything is relative.<BR>
   <B>&#8756;</B> Einstein's theory of relativity cannot be true.

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that the opposition's argument has been misrepresented by showing that the opposition has a stronger argument.  Describe the stronger argument.  

 </P>  
<HR><HR><A NAME="114"></A><H2>114.  Red Herring </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Red Herring </I> ]</UL>
<H4>Description</H4>                

   <P> Intentionally divert attention away from the subject.  

 </P>  
<HR><HR><A NAME="115"></A><H2>115.  Two Wrongs Make a Right </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Two Wrongs Make a Right </I> ]</UL>
<H4>Examples</H4><UL>            

   <LI> The operation cost just under $500, and no one was killed, or even hurt.  In that same time the Pentagon spent tens of millions of dollars and dropped tens of thousands of pounds of explosives on Viet Nam, killing or wounding thousands of human beings, causing hundreds of millions of dollars of damage.  Because nothing justified their actions in our calculus, nothing could contradict the merit of ours.  

 </LI> </UL> 
<HR><HR><A NAME="116"></A><H2>116.  Dicto Simpliciter </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Relevance Claim :: Dicto Simpliciter </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Accident

   </LI> <LI> Sweeping Generalization

   </LI> <LI> A Dicto simpliciter Ad Dictum Secundum Quid

</LI> </UL> <H4>Description</H4>                

   <P> A general rule is misapplied to a specific case it was not intended to cover.  The rule is therefore irrelevant to the argument.

   </P> <P> A fallacy of the following form:
<PRE>
	Xs are normally Ys.
	c is an X.  (Where c is abnormal)
<B>&#8756;</B>	c is a Y.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Birds normally can fly.  Tweety the Penguin is a bird.  So, Tweety can fly.

   </LI> <LI> The law says that you should not travel faster than 65mph, thus even though your father could not breathe, you should not have travelled faster than 65mph.

   </LI> <LI> It is good to return things you have borrowed.  Therefore, you should return this automatic rifle to the madman you borrowed it from.

   </LI> <LI> Christians generally dislike atheists.  You are a Christian, so you must dislike atheists.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the generalization in question and show that it is not a universal generalization.  Then show that the circumstances of this case suggest that the generalization ought not to apply.  

 </P>  
<HR><HR><A NAME="117"></A><H2>117.  Fallacies of Deductive Rules </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Formal Fallacies

</LI> </UL> <H4>Description</H4>                

   <P> A formal fallacy is based solely on logical form.  It is committed when someone uses a valid deductive inference rule incorrectly or when someone 'invents' an invalid inference rule.  

 </P>  
<HR><HR><A NAME="118"></A><H2>118.  Syllogistic Fallacies </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies </I> ]</UL>
<H4>Description</H4>                

   <P> Involves an illicit rule of deduction applied to a syllogism.  

 </P>  
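<P> Because the syllogistic fallacies that follow are all invalid *forms*, a form's invalidity can be demonstrated mechanically by searching small set-theoretic models for a counterexample.  A minimal sketch in Python (the helper names <CODE>subsets</CODE> and <CODE>valid</CODE> are illustrative, not from any library): </P>

```python
from itertools import product

def subsets(universe):
    """Yield every subset of a small universe as a frozenset."""
    items = list(universe)
    for bits in product([0, 1], repeat=len(items)):
        yield frozenset(x for x, b in zip(items, bits) if b)

def valid(premises, conclusion, n=3):
    """Brute-force validity test: interpret the three terms S, M, P as
    subsets of an n-element domain and look for a model in which all
    premises hold but the conclusion fails."""
    for S in subsets(range(n)):
        for M in subsets(range(n)):
            for P in subsets(range(n)):
                if all(p(S, M, P) for p in premises) and not conclusion(S, M, P):
                    return False  # counterexample found
    return True

# "All M are P.  All S are M.  So, all S are P."  (the valid form Barbara)
barbara = valid([lambda S, M, P: M <= P,
                 lambda S, M, P: S <= M],
                lambda S, M, P: S <= P)

# "All P are M.  All S are M.  So, all S are P."  (undistributed middle)
undistributed = valid([lambda S, M, P: P <= M,
                       lambda S, M, P: S <= M],
                      lambda S, M, P: S <= P)

print(barbara, undistributed)  # True False
```

<P> A three-element domain is enough to expose every invalid two-premise form discussed below, since each needs only a small counterexample model. </P>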
<HR><HR><A NAME="119"></A><H2>119.  Existential Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Existential Fallacy </I> ]</UL>
<H4>Description</H4>                

   <P> A standard form categorical syllogism with two universal premises has a particular conclusion.

</P>  <H4>Examples</H4><UL>            

   <LI> All mice are animals, and all animals are dangerous, so some mice are dangerous.

   </LI> <LI> No honest people steal, and all honest people pay taxes, so some honest people pay taxes.

</LI> </UL> <H4>Exposition</H4>                

   <P> Assume that the premises are true, but that there are no instances of the category described.  For example, in the first example above, assume there are no mice, and in the second, assume there are no honest people.  This shows that the conclusion can be false even while the premises hold.  

 </P>  
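<P> The exposition's empty-domain trick can be made concrete with sets: with no mice at all, both universal premises hold vacuously while the particular conclusion fails.  A sketch using Python's subset operator <CODE>&lt;=</CODE>: </P>

```python
# Empty-domain counterexample to the existential fallacy:
# both universal premises are vacuously true, the conclusion false.
mice, animals, dangerous = set(), set(), set()

premise_1 = mice <= animals            # "All mice are animals"      (vacuous)
premise_2 = animals <= dangerous       # "All animals are dangerous" (vacuous)
conclusion = len(mice & dangerous) > 0 # "Some mice are dangerous"

print(premise_1, premise_2, conclusion)  # True True False
```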
<HR><HR><A NAME="120"></A><H2>120.  Exclusive Premises </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Exclusive Premises </I> ]</UL>
<H4>Alternative Names</H4><UL>            

   <LI> Two Negative Premises

   </LI> <LI> Exclusive Premises

</LI> </UL> <H4>Description</H4>                

   <P> Any categorical syllogism with two negative premises.

</P>  <H4>Examples</H4>                

   <P> No Moslems are Christians.  No Jews are Moslems.  So, no Jews are Christians.

</P>  <H4>Counter-Examples</H4>                

   <P> No reptiles are mammals.  No dogs are reptiles.  So, no dogs are mammals.  

 </P>  
<HR><HR><A NAME="121"></A><H2>121.  Undistributed Middle </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Undistributed Middle </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Undistributed Middle Term

</LI> </UL> <H4>Description</H4>                

   <P> Any form of categorical syllogism in which the middle term is not distributed at least once.

</P>  <H4>Examples</H4><UL>            

   <LI> All communists are liberals.  All democrats are liberals.  So, all democrats are communists.  

 </LI> </UL> 
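<P> A small model makes the counterexample explicit: both premises of the example hold while the conclusion fails.  The set members below are purely illustrative: </P>

```python
# Model for "All communists are liberals.  All democrats are liberals.
# So, all democrats are communists." -- premises true, conclusion false.
liberals   = {"alice", "bob", "carol"}
communists = {"alice"}           # all communists are liberals
democrats  = {"bob", "carol"}    # all democrats are liberals

print(communists <= liberals)    # True  (premise 1)
print(democrats <= liberals)     # True  (premise 2)
print(democrats <= communists)   # False (conclusion fails)
```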
<HR><HR><A NAME="122"></A><H2>122.  Illicit Process </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Process </I> ]</UL>
<H4>Description</H4>                

   <P> Any form of categorical syllogism in which a term that is distributed in the conclusion is undistributed in a premise.  

 </P>  
<HR><HR><A NAME="123"></A><H2>123.  Illicit Major </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Process :: Illicit Major </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Illicit Process of the Major Term

</LI> </UL> <H4>Description</H4>                

   <P> Any form of categorical syllogism in which the major term is distributed in the conclusion, but not in the major premise.

</P>  <H4>Examples</H4><UL>            

   <LI> All communists are leftists.  No conservatives are communists.  So, no conservatives are leftists.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> All dogs are animals.  No cats are dogs.  So, no cats are animals.

   </LI> <LI> All Texans are Americans, and no Californians are Texans, therefore, no Californians are Americans.

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that there may be other members of the predicate category not mentioned in the premises which are contrary to the conclusion.  

 </P>  
<HR><HR><A NAME="124"></A><H2>124.  Illicit Minor </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Process :: Illicit Minor </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Illicit Process of the Minor Term

</LI> </UL> <H4>Description</H4>                

   <P> Any form of categorical syllogism in which the minor term is distributed in the conclusion but not in the minor premise.

</P>  <H4>Examples</H4><UL>            

   <LI> All terrorists are extremists.  All extremists are radicals.  So, all radicals are terrorists.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> All whales are mammals.  All mammals are animals.  So, all animals are whales.  

 </LI> </UL> 
<HR><HR><A NAME="125"></A><H2>125.  Illicit Affirmative/Negative </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Affirmative/Negative </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Affirmative Conclusion from Negative Premises

   </LI> <LI> Negative Conclusion from Affirmative Premises

</LI> </UL> <H4>Description</H4>                

   <P> Any form of categorical syllogism with a negative conclusion and affirmative premises.

</P>  <H4>Examples</H4><UL>            

   <LI> All sound arguments are valid.  Some fallacious arguments are sound.  So, some fallacious arguments are not valid.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> All dogs are animals.  Some pets are dogs.  So, some pets are not animals.  

 </LI> </UL> 
<HR><HR><A NAME="126"></A><H2>126.  Syllogistic Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Affirmative/Negative :: Syllogistic Fallacy </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Affirmative Conclusion from a Negative Premise

</LI> </UL> <H4>Description</H4>                

   <P> Any form of categorical syllogism with an affirmative conclusion and at least one negative premise.

</P>  <H4>Examples</H4><UL>            

   <LI> All judges are politicians.  Some lawyers are not judges.  Therefore, some lawyers are politicians.

   </LI> <LI> All mice are animals, and some animals are not dangerous, therefore some mice are dangerous.

   </LI> <LI> No honest people steal, and all honest people pay taxes, so some people who steal pay taxes.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> All whales are mammals.  Some fish are not whales.  Therefore, some fish are mammals.  

 </LI> </UL> <H4>Exposition</H4>                

   <P> Assume that the premises are true.  Find an example which allows the premises to be true but which clearly contradicts the conclusion.  

 </P>  
<HR><HR><A NAME="127"></A><H2>127.  Illicit Conversion </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Transform :: Illicit Conversion </I> ]</UL>
<H4>Description</H4>                

   <P> Treating the conversion of an A or O proposition as an immediate inference.  

 </P>  
<HR><HR><A NAME="128"></A><H2>128.  Illicit Contraposition </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Syllogistic Fallacies :: Illicit Transform :: Illicit Contraposition </I> ]</UL>
<H4>Description</H4>                

   <P> Treating the contraposition of an E or I proposition as an immediate inference.  

 </P>  
<HR><HR><A NAME="129"></A><H2>129.  Propositional Logic </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic </I> ]</UL>
<H4>Description</H4>                

   <P> Involves an illicit rule of deduction applied to a truth-functional operator.  

 </P>  
<HR><HR><A NAME="130"></A><H2>130.  Affirming a Disjunct </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic :: Affirming a Disjunct </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Affirming One Disjunct

   </LI> <LI> The Fallacy of the Alternate Syllogism

   </LI> <LI> Asserting an Alternative

   </LI> <LI> Improper Disjunctive Syllogism

</LI> </UL> <H4>Description</H4>                

   <P>'Or' is used in the exclusive sense even though logic defines it as inclusive.  This fallacy has the following form:
<PRE>
	P or Q.
	P.
<B>&#8756;</B>	not Q.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Today is Saturday or Sunday.<BR>
   Today is Saturday.<BR>
   <B>&#8756;</B> Today is not Sunday.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI>As with other propositional fallacies, an argument which affirms a disjunct is most likely to seem valid when we take into consideration some further information not explicitly mentioned in the argument.  In the case of affirming a disjunct, this is:<BR>

<PRE>
   implied premise: Not both P and Q.
</PRE>

   </LI> <LI> If we have some reason to believe that the two disjuncts are contraries, then the argument may be a valid enthymeme.  In contrast, if we cannot rule out the truth of both disjuncts, then the argument is fallacious.

</LI> </UL> <H4>Exposition</H4>                

   <P> In common language, 'or' is either inclusive or exclusive.  As a form of argument, Affirming One Disjunct is perfectly valid for the exclusive sense of 'or'; it is only for the inclusive sense that it is fallacious.  For this reason, an ambiguity between the two senses faces any application of Affirming One Disjunct as a fallacy.  In order to accuse an argument of committing this fallacy, we must determine in which sense the 'or' in the first premise is used.  

 </P>  
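   <P> The ambiguity can be checked mechanically.  A minimal Python sketch (the helper name 'counterexamples' is mine, purely illustrative) enumerates the truth table and shows that the form is invalid for inclusive 'or' but valid for exclusive 'or': </P>

```python
from itertools import product

def counterexamples(premises, conclusion):
    """Truth-value assignments where every premise holds but the conclusion fails."""
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(f(p, q) for f in premises) and not conclusion(p, q)]

# Affirming a Disjunct: P or Q; P; therefore not Q.
inclusive = counterexamples([lambda p, q: p or q, lambda p, q: p], lambda p, q: not q)
exclusive = counterexamples([lambda p, q: p != q, lambda p, q: p], lambda p, q: not q)

print(inclusive)  # [(True, True)] -- invalid: both disjuncts can be true
print(exclusive)  # []             -- valid for the exclusive sense
```

   <P> The single counterexample row, P and Q both true, is exactly the case the implied premise "not both P and Q" would rule out. </P>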
<HR><HR><A NAME="131"></A><H2>131.  Affirming the Consequent </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic :: Affirming the Consequent </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Asserting the Consequent

   </LI> <LI> Affirmation of the Consequent

</LI> </UL> <H4>Description</H4>                

   <P> Fallacies of the form:
<PRE>
	if P then Q.
	Q
<B>&#8756;</B>	P
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> If it's raining, the streets will be wet.  The streets are wet.  So, it's raining.

</LI> </UL> <H4>Counter-Examples</H4>                

   <P> If it's snowing, the streets will be covered with snow.  The streets are covered with snow.  So, it's snowing.  

 </P>  
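   <P> The invalidity of this form can be confirmed by brute force.  A small sketch (variable names are my own) searches the truth table for a row where both premises hold but the conclusion fails: </P>

```python
from itertools import product

# Affirming the Consequent: if P then Q; Q; therefore P.
# Look for a row where both premises are true but the conclusion is false.
bad_rows = [(p, q) for p, q in product([True, False], repeat=2)
            if ((not p) or q) and q and not p]

print(bad_rows)  # [(False, True)]: Q can be true without P (wet streets, no rain)
```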
<HR><HR><A NAME="132"></A><H2>132.  Commutation of Conditionals </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic :: Commutation of Conditionals </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Converting a Conditional

   </LI> <LI> The Fallacy of the Consequent

</LI> </UL> <H4>Description</H4>                

   <P> Argument takes the form:
<PRE>
	if A then B.
<B>&#8756;</B>	if B then A.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> If James is a bachelor, then he is unmarried.  So, If he's unmarried, then he's a bachelor.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> If James is President, then he is over 35.  So, if James is over 35, then he is president.  

 </LI> </UL> 
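   <P> A quick truth-table check (a sketch; the 'implies' helper is my own shorthand for the material conditional) exhibits the row that refutes the converted conditional: </P>

```python
from itertools import product

# Commutation of Conditionals: if A then B; therefore if B then A.
implies = lambda a, b: (not a) or b

bad_rows = [(a, b) for a, b in product([True, False], repeat=2)
            if implies(a, b) and not implies(b, a)]

print(bad_rows)  # [(False, True)]: B can hold without A, so the converse fails
```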
<HR><HR><A NAME="133"></A><H2>133.  Denying a Conjunct </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic :: Denying a Conjunct </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> The Fallacy of the Disjunctive Syllogism

</LI> </UL> <H4>Description</H4>                

   <P> Of the form:
<PRE>
	Not both P and Q
	Not P.
<B>&#8756;</B>	Q
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> It isn't both sunny and overcast.  It isn't sunny.  It's overcast.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> It isn't both raining and snowing.  It isn't raining.  It's snowing.  

 </LI> </UL> 
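   <P> A brief sketch (helper name mine) contrasts this fallacy with the genuine disjunctive syllogism, which it superficially resembles: </P>

```python
from itertools import product

def invalid(premises, conclusion):
    """True if some assignment makes every premise true and the conclusion false."""
    return any(all(f(p, q) for f in premises) and not conclusion(p, q)
               for p, q in product([True, False], repeat=2))

# Denying a Conjunct: not both P and Q; not P; therefore Q.
print(invalid([lambda p, q: not (p and q), lambda p, q: not p], lambda p, q: q))  # True

# Contrast: the valid disjunctive syllogism -- P or Q; not P; therefore Q.
print(invalid([lambda p, q: p or q, lambda p, q: not p], lambda p, q: q))         # False
```

   <P> The refuting row has both conjuncts false: it may be neither raining nor snowing. </P>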
<HR><HR><A NAME="134"></A><H2>134.  Denying the Antecedent </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic :: Denying the Antecedent </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Denial of the Antecedent

</LI> </UL> <H4>Description</H4>                

   <P> Any argument of the following form is invalid:
<PRE>
	if A then B
	not A
<B>&#8756;</B>	not B
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> If you get hit by a car when you are six then you will die young.  But you were not hit by a car when you were six.  Thus you will not die young.

   </LI> <LI> If I am in Calgary then I am in Alberta.  I am not in Calgary, thus, I am not in Alberta.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> If it's raining, then the streets are wet.  It isn't raining.  So, the streets aren't wet.

</LI> </UL> <H4>Exposition</H4><UL>            

   <LI> Show that even though the premises are true, the conclusion may be false.  In particular, show that the consequence B may occur even though A does not occur.

   </LI> <LI> Together with Affirming the Consequent, this is a fallacy which involves either confusion about the direction of a conditional relation, or a conflating of a conditional with a biconditional proposition.  

 </LI> </UL> 
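   <P> The exposition above can be carried out mechanically.  A minimal sketch (names mine) finds the assignment where the premises are true yet B still occurs: </P>

```python
from itertools import product

# Denying the Antecedent: if A then B; not A; therefore not B.
implies = lambda a, b: (not a) or b

bad_rows = [(a, b) for a, b in product([True, False], repeat=2)
            if implies(a, b) and not a and b]  # premises true, conclusion "not B" false

print(bad_rows)  # [(False, True)]: B occurred through some other route
```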
<HR><HR><A NAME="135"></A><H2>135.  Improper Transposition </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Propositional Logic :: Improper Transposition </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Negating Antecedent and Consequent

</LI> </UL> <H4>Description</H4>                

   <P> Fallacy of the form:
<PRE>
	If P then Q.
<B>&#8756;</B>	If not P then not Q.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> If there's a fire, then there's smoke.  So, if there's no fire, then there's no smoke.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> If we guillotine the king, then he will die.  So, if we don't guillotine the king, then he won't die.  

 </LI> </UL> 
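   <P> A sketch (helper names mine) contrasts this improper form with proper transposition, i.e. contraposition, which negates the terms and also swaps them: </P>

```python
from itertools import product

implies = lambda a, b: (not a) or b

def follows(premise, conclusion):
    """True if the conclusion is true in every row where the premise is."""
    return all(conclusion(p, q) for p, q in product([True, False], repeat=2)
               if premise(p, q))

# Improper Transposition: if P then Q |- if not P then not Q.  Does NOT follow.
print(follows(lambda p, q: implies(p, q), lambda p, q: implies(not p, not q)))  # False

# Proper transposition (contraposition): if P then Q |- if not Q then not P.
print(follows(lambda p, q: implies(p, q), lambda p, q: implies(not q, not p)))  # True
```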
<HR><HR><A NAME="136"></A><H2>136.  Quantified Logic </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Quantified Logic </I> ]</UL>
<H4>Description</H4>                

   <P> Involves an illicit rule of deduction applied to a quantifier.  

 </P>  
<HR><HR><A NAME="137"></A><H2>137.  Illicit Conversion </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Quantified Logic :: Illicit Conversion </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> False Conversion

</LI> </UL> <H4>Description</H4>                

   <P> Syllogisms of the form:
<PRE>
	All P are Q.
<B>&#8756;</B>	All Q are P.
</PRE>

<B>or</B>

<PRE>
	Some P are not Q.
<B>&#8756;</B>	Some Q are not P.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> All communists are atheists.  So, all atheists are communists.

   </LI> <LI> Some dogs are not pets.  So, some pets are not dogs.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> All dogs are mammals.  So, all mammals are dogs.

   </LI> <LI> Some mammals are not cats.  So, some cats are not mammals.  

 </LI> </UL> 
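   <P> The dogs/mammals counter-example above can be rendered as a finite model, with the categorical "All P are Q" read as set inclusion.  A small sketch (the particular sets are illustrative only): </P>

```python
# Finite-model counterexample to "All P are Q, therefore all Q are P".
dogs = {"rex", "fido"}
mammals = {"rex", "fido", "whale"}

all_p_are_q = dogs <= mammals    # All dogs are mammals: True
all_q_are_p = mammals <= dogs    # All mammals are dogs: False

print(all_p_are_q, all_q_are_p)  # True False

# Likewise "Some P are not Q" does not convert: some mammals are not dogs,
# yet it does not follow that some dogs are not mammals.
print(bool(mammals - dogs), bool(dogs - mammals))  # True False
```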
<HR><HR><A NAME="138"></A><H2>138.  Some Are/Some Are Not </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Quantified Logic :: Some Are/Some Are Not </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Unwarranted Contrast

</LI> </UL> <H4>Description</H4>                

   <P> Fallacies of the form:
<PRE>
	Some S are P
<B>&#8756;</B>	Some S are not P.
</PRE>

<B>or</B>

<PRE>
	Some S are not P.
<B>&#8756;</B>	Some S are P.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Some politicians are crooks.  So, some politicians are not crooks.<BR>
   'Some' means anywhere from one to all.  Since it is possible that it refers to all, the conclusion does not follow.  

 </LI> </UL> 
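   <P> The point is easy to exhibit in a finite model where 'some' happens to mean 'all' (the sets below are illustrative only): </P>

```python
# "Some S are P" does not warrant "Some S are not P": in a model where
# every politician is a crook, the premise is true and the conclusion false.
politicians = {"smith", "jones"}
crooks = {"smith", "jones", "doe"}

some_are = bool(politicians & crooks)      # Some politicians are crooks: True
some_are_not = bool(politicians - crooks)  # Some politicians are not crooks: False

print(some_are, some_are_not)  # True False
```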
<HR><HR><A NAME="139"></A><H2>139.  Illicit Substitution of Identicals </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Deductive Rules :: Fallacies involving Identity :: Illicit Substitution of Identicals </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI>Masked Man Fallacy

</LI> </UL> <H4>Description</H4>                

   <P> Fallacies of the form

<PRE>
	a = b
	Ca
<B>&#8756;</B>	Cb
</PRE>

<B>or</B>

<PRE>
	Ca
	Not Cb
<B>&#8756;</B>	not a = b
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> The witness claims that the masked man committed the crime.  The witness denies that Mr. Hyde committed the crime.  Therefore, Mr. Hyde is not the masked man.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> The witness claims that the masked man committed the crime.  The masked man is Mr. Hyde.  Therefore, the witness claims that Mr. Hyde committed the crime.  

 </LI> </UL> 
<HR><HR><A NAME="140"></A><H2>140.  Fallacies of Induction </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction </I> ]</UL>
<H4>Alternate Name</H4><UL>            

   <LI> Weak Induction

</LI> </UL> <H4>Description</H4>                

   <P> Occur when the inductive probability of an argument (i.e., the probability of its conclusion given its premises) is low, or at least lower than the arguer thinks it is.  

 </P>  
<HR><HR><A NAME="141"></A><H2>141.  Relating to Structure </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Relating to Structure </I> ]</UL>
<H4>Description</H4>                

   <P> Occur when the author mistakenly assumes that the whole is nothing more than the sum of its parts.  However, things joined together may have different properties as a whole than any of them do separately.  

 </P>  
<HR><HR><A NAME="142"></A><H2>142.  Composition </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Relating to Structure :: Composition </I> ]</UL>
<H4>Description</H4>                

   <P> Occurs when we invalidly impute characteristics of one or more parts of a thing to the whole of which they are parts.

<PRE>
Form:

	P1, ..., Pn are parts of W.
	P1, ..., Pn have property F.
<B>&#8756;</B>	W has property F.
</PRE>

</P>  <H4>Example</H4><UL>            

   <LI> Each brick in the wall is three inches tall.  Thus, the brick wall is three inches tall.

   </LI> <LI> Each German is militant.  Thus, Germany is a militant country.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> The human body is made up of atoms, which are invisible.  So, the body is invisible.

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that the properties in question are properties of the whole, and not of each part or member of the whole.  If necessary, describe the parts to show that they could not have the properties of the whole.  

 </P>  
<HR><HR><A NAME="143"></A><H2>143.  Division </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Relating to Structure :: Division </I> ]</UL>
<H4>Description</H4>                

   <P> If something has property F, then its parts are attributed with property F.

</P>  <H4>Examples</H4><UL>            

   <LI> The universe has existed for fifteen billion years.  The universe is made out of molecules.  So, each of the molecules in the universe has existed for fifteen billion years.

   </LI> <LI> The brick wall is six feet tall, thus, each brick in the wall is six feet tall.

   </LI> <LI> Because the brain is capable of consciousness, each neural cell in the brain must be capable of consciousness.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> People are made out of atoms.  People are visible.  So, atoms are visible.

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that the properties in question are properties of the whole, and not of the parts.  If necessary, describe the parts to show that they could not have the properties of the whole.  

 </P>  
<HR><HR><A NAME="144"></A><H2>144.  Ad Logicam </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Relating to Structure :: Division :: Ad Logicam </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argumentum Ad Logicam

   </LI> <LI> Fallacy Fallacy

</LI> </UL> <H4>Description</H4>                

   <P> Argument that a proposition is false because it has been presented as the conclusion of a fallacious argument.  

 </P>  
<HR><HR><A NAME="145"></A><H2>145.  Non Causa Pro Causa </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> False Cause

</LI> </UL> <H4>Description</H4>                

   <P> What is common to all false-cause fallacies is that their conclusions are causal claims which are inadequately supported by their premises.  

 </P>  
<HR><HR><A NAME="146"></A><H2>146.  Cum Hoc Ergo Propter Hoc </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa :: Cum Hoc Ergo Propter Hoc </I> ]</UL>
<H4>Description</H4>                

   <P> Fallacious inference that a causal relationship exists merely because two events happened together.

</P>  <H4>Examples</H4><UL>            

   <LI> Literacy rates have steadily declined since the advent of television.  Clearly television viewing impedes learning.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> The bigger a child's shoe size, the better the child's handwriting.  So, having big feet must make it easier to write.  

 </LI> </UL> 
<HR><HR><A NAME="147"></A><H2>147.  Post Hoc Ergo Propter Hoc </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa :: Post Hoc Ergo Propter Hoc </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> post hoc

</LI> </UL> <H4>Description</H4>                

   <P> Fallacious inference that a causal relationship exists merely because one event happened before the other.

</P>  <H4>Examples</H4><UL>            

   <LI> Immigration to Alberta from Ontario increased.  Soon after, the welfare rolls increased.  Therefore, the increased immigration caused the increased welfare rolls.

   </LI> <LI> I took EZ-No-Cold, and two days later, my cold disappeared.

</LI> </UL> <H4>Counter-Example</H4><UL>            

   <LI> Roosters crow just before the sun rises.  So, roosters crowing cause the sun to rise.

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that the correlation is coincidental by showing that:  1.  the effect would have occurred even if the cause did not occur, or 2. that the effect was caused by something other than the suggested cause.  

 </P>  
<HR><HR><A NAME="148"></A><H2>148.  Joint Effect </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa :: Joint Effect </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> One thing is held to cause another when in fact both are the effect of a single underlying cause.  This fallacy is often understood as a special case of post hoc ergo propter hoc.

</P>  <H4>Examples</H4><UL>            

   <LI> We are experiencing high unemployment, which is being caused by low consumer demand.  (In fact, both may be caused by high interest rates.)

   </LI> <LI> You have a fever and this is causing you to break out in spots.  (In fact, both symptoms are caused by the measles.)

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the two effects and show that they are caused by the same underlying cause.  It is necessary to describe the underlying cause and prove that it causes each symptom.  

 </P>  
<HR><HR><A NAME="149"></A><H2>149.  Wrong Direction </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa :: Wrong Direction </I> ]</UL>
<H4>Description</H4>                

   <P> The relation between cause and effect is reversed.

</P>  <H4>Examples</H4><UL>            

   <LI> Cancer causes smoking.

   </LI> <LI> The increase in AIDS was caused by more sex education.

</LI> </UL> <H4>Exposition</H4>                

   <P> Give a causal argument showing that the relation between cause and effect has been reversed.  

 </P>  
<HR><HR><A NAME="150"></A><H2>150.  Complex Cause </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa :: Complex Cause </I> ]</UL>
<H4>Alternate Name</H4><UL>            

   <LI> Oversimplified Cause

</LI> </UL> <H4>Description</H4>                

   <P> The effect is caused by a number of objects or events, of which the cause identified is only a part.  A variation of this is the feedback loop, where the effect is itself a part of the cause.

</P>  <H4>Examples</H4><UL>            

   <LI> The accident was caused by the poor location of the bush.  (True, but it wouldn't have occurred had the driver not been drunk and the pedestrian not been jaywalking)

   </LI> <LI> The Challenger explosion was caused by the cold weather.  (True, however, it would not have occurred had the O-rings been properly constructed.)

</LI> </UL> <H4>Exposition</H4>                

   <P> Show that all the causes, and not just the one mentioned, are required to produce the effect.  

 </P>  
<HR><HR><A NAME="151"></A><H2>151.  Genuine but Insignificant Cause </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Non Causa Pro Causa :: Complex Cause :: Genuine but Insignificant Cause </I> ]</UL>
<H4>Description</H4>                

   <P> The object or event identified as the cause of an effect is a genuine cause, but insignificant when compared to the other causes of that event.  Note that this fallacy does not apply when all other contributing causes are equally insignificant.  Thus, it is not a fallacy to say that you helped defeat the Tory government because you voted Reform, for your vote had as much weight as any other vote, and hence is equally a part of the cause.

</P>  <H4>Examples</H4><UL>            

   <LI> Smoking is causing air pollution in Edmonton.  (True, but the effect of smoking is insignificant compared to the effect of auto exhaust.)

   </LI> <LI> By leaving your oven on overnight you are contributing to global warming.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the much more significant cause.  

 </P>  
<HR><HR><A NAME="152"></A><H2>152.  Faulty Analogy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Faulty Analogy </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Weak Analogy

</LI> </UL> <H4>Description</H4>                

   <P> Inductive fallacy associated with analogical reasoning.  (Not all analogical reasoning is faulty.)

</P>  <H4>Examples</H4><UL>            

   <LI> Employees are like nails.  Just as nails must be hit in the head in order to make them work; so must employees.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the two objects or events being compared and the property which both are said to possess.  Show that the two objects are different in a way which will affect whether they both have that property.  

 </P>  
<HR><HR><A NAME="153"></A><H2>153.  Hasty Generalization </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Hasty Generalization </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Converse Accident

   </LI> <LI> Unrepresentative Sample

   </LI> <LI> Biased Sample

</LI> </UL> <H4>Description</H4>                

   <P> Occurs when you form a general rule by examining a small sample which isn't representative of all possible cases.

</P>  <H4>Examples</H4><UL>            

   <LI> Fred, the Australian, stole my wallet.  Thus, all Australians are thieves.

   </LI> <LI> The apples on the top of the box look good.  The entire box of apples must be good.

   </LI> <LI> To see how Canadians will vote in the next election we polled a hundred people in Calgary.  This shows conclusively that the Reform Party will sweep the polls.

   </LI> <LI> Because we allow terminally ill patients to use heroin, we should allow everyone to use heroin.

   </LI> <LI> Because you allowed Jill, who was hit by a truck, to hand in her assignment late, you should allow the entire class to hand in their assignments late.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the size of the sample and the size of the population, then show that the sample size is too small.  Note: a formal proof would require a mathematical calculation.  This is the subject of probability theory.  For now, you must rely on common sense.

   </P> <P> <B>OR</B> Show how the sample is relevantly different from the population as a whole, then show that because the sample is different, the conclusion is probably different.

   </P> <P> <B>OR</B> Identify the generalization in question and show how the special case was an exception to the generalization.  

 </P>  
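   <P> On the sample-size point, the standard rule of thumb from probability theory makes the exposition concrete: the 95% margin of error for an estimated proportion is roughly 1 divided by the square root of the sample size.  A minimal sketch (function name mine): </P>

```python
import math

def rough_margin_of_error(n):
    """Rule-of-thumb 95% margin of error for a proportion estimated from n samples."""
    return 1 / math.sqrt(n)

for n in (10, 100, 10_000):
    print(n, round(rough_margin_of_error(n), 3))
# A sample of 10 leaves roughly a +/-32% margin; even 100 leaves about +/-10%.
```

   <P> Note this addresses only sample size; a biased sample can mislead no matter how large it is. </P>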
<HR><HR><A NAME="154"></A><H2>154.  Gambler's Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Gambler's Fallacy </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> The Monte Carlo Fallacy

</LI> </UL> <H4>Description</H4>                

   <P> An argument of the following form (in reference to a process in which each event is independent of preceding and succeeding events):

<PRE>
	x has not occurred recently.
<B>&#8756;</B>	x is likely to happen soon.
</PRE>  

 </P>  
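   <P> Independence is the whole point: for a fair coin, the chance of heads is one half no matter how long the preceding run of tails was.  A quick Monte Carlo sketch (function name mine) checks this empirically: </P>

```python
import random

random.seed(0)

def heads_after_tail_run(run_length, trials=200_000):
    """Empirical chance of heads immediately after a run of all-tails flips."""
    hits = wins = 0
    flips = [random.random() < 0.5 for _ in range(trials)]  # True = heads
    for i in range(run_length, len(flips)):
        if not any(flips[i - run_length:i]):  # the preceding run was all tails
            hits += 1
            wins += flips[i]
    return wins / hits

print(heads_after_tail_run(3))  # close to 0.5: the run does not make heads "due"
```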
<HR><HR><A NAME="155"></A><H2>155.  Suppressed Evidence </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Suppressed Evidence </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Missing Premise

   </LI> <LI> Card Stacking

   </LI> <LI> One-Sided Assessment

   </LI> <LI> Slanting

   </LI> <LI> One-Sidedness

</LI> </UL> <H4>Description</H4>                

   <P> Leaving a key premise out of the argument while creating the illusion that nothing more is needed to establish the conclusion.

   </P> <P> Such an argument presents only evidence favoring its conclusion and ignores or downplays evidence against it.

</P>  <H4>Examples</H4><UL>            

   <LI> Murder is morally wrong.  This being the case, it follows that abortion is morally wrong.  (Question begged:  How do you know that abortion is a form of murder?)

   </LI> <LI> Of course humans and apes evolved from common ancestors.  Just look how similar they are.  (Question begged:  Does the mere fact that humans and apes look similar imply that they evolved from common ancestors?)  

 </LI> </UL> 
<HR><HR><A NAME="156"></A><H2>156.  Sunk-cost fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Sunk-cost fallacy </I> ]</UL>
<H4>Examples</H4>                

<PRE>
	I can't stop now, otherwise what I've invested so far will be lost.
</PRE>

   <P> This may be true, but it is irrelevant: the investment is lost either way, and only the costs and benefits from this point forward bear on the decision.  </P>

  
<HR><HR><A NAME="157"></A><H2>157.  Texas-Sharpshooter Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Texas-Sharpshooter Fallacy </I> ]</UL>
<H4>Description</H4>                

   <P> The name epidemiologists give to the clustering illusion.  Politicians, lawyers and some scientists tend to isolate clusters of diseases from their context, thereby giving the illusion of a causal connection between some environmental factor and the disease.  

 </P>  
<HR><HR><A NAME="158"></A><H2>158.  Argumentum ex silentio </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Induction :: Argumentum ex silentio </I> ]</UL>
<H4>Description</H4>                

   <P> Argument that the silence of a speaker or writer about X proves or suggests that the speaker or writer is ignorant of X.  

 </P>  
<HR><HR><A NAME="159"></A><H2>159.  Fallacies of Semantics </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics </I> ]</UL>
<H4>Description</H4>                

   <P> Fallacies where a word or phrase is used unclearly or has multiple meanings.

   </P> <P> Ambiguous or Vague statements or words.  

 </P>  
<HR><HR><A NAME="160"></A><H2>160.  Equivocation </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Equivocation </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Ambiguous Middle Term

   </LI> <LI> Four-Term Fallacy

   </LI> <LI> Quaternio Terminorum

</LI> </UL> <H4>Description</H4>                

   <P> The same word or phrase is used in an argument with two different meanings.  Ambiguity generates fallacies when the meaning of an expression shifts during the course of an argument, causing a misleading appearance of validity.

</P>  <H4>Examples</H4><UL>            

   <LI> Only man is born free, and no women are men, therefore, no women are born free.  ('man' is used first to mean 'humanity', then to mean 'male')

   </LI> <LI> All human fetuses are human.  Any human has the right to life.  So, all human fetuses have the right to life.

   </LI> <LI> All Republicans are democrats.  All democrats are liberals.  So, all Republicans are liberals.

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> The humanity of the patient's appendix is medically undeniable.  So, the appendix has a right to life and should not be surgically removed.

   </LI> <LI> The sign said, "fine for parking here", and since it was fine, I parked there.

   </LI> <LI> All child-murderers are inhuman, thus, no child murderer is human.

   </LI> <LI> A plane is a carpenter's tool, and the Boeing 737 is a plane, hence the Boeing 737 is a carpenter's tool.

</LI> </UL> <H4>Exposition</H4><UL>            

   <LI> Identify the word which is used twice, then show that a definition which is appropriate for one use of the word would not be appropriate for the second use.

   </LI> <LI> Use the counterexample technique:

   </LI> <LI> All dog organs are canine.  Any canine must be on a leash.  So, all dog organs must be on a leash.  

 </LI> </UL> 
<HR><HR><A NAME="161"></A><H2>161.  Amphiboly </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Amphiboly </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Amphibology

</LI> </UL> <H4>Description</H4>                

   <P> Ambiguity at the level of sentence structure (not traceable to any particular word but to how the words are assembled).

</P>  <H4>Examples</H4><UL>            

   <LI> The anthropologists went to a remote area and took photographs of some native women, but they weren't developed. (pronoun ambiguity)

   </LI> <LI> One morning I shot an elephant in my pajamas.  How he got into my pajamas I'll never know. (modifier ambiguity)

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the ambiguous phrase and show the two possible interpretations.  

 </P>  
<HR><HR><A NAME="162"></A><H2>162.  Scope Fallacy </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Amphiboly :: Scope Fallacy </I> ]</UL>
<H4>Description</H4>                

   <P> Ambiguity in the scope of limiting words such as 'all', 'not' and 'only'.

</P>  <H4>Examples</H4><UL>            

   <LI> All that glitters is not gold.  This rock glitters.  So, this rock is not gold.

<PRE>
   The first proposition has two readings:
	- narrow: All that glitters is non-gold.
	- broad: It is not the case that all that glitters is gold.
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="163"></A><H2>163.  Narrow Scope </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Amphiboly :: Scope Fallacy :: Narrow Scope </I> ]</UL>
<H4>Description</H4>                

   <P> The word refers only to the word it precedes.  

 </P>  
<HR><HR><A NAME="164"></A><H2>164.  Broad Scope </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Amphiboly :: Scope Fallacy :: Broad Scope </I> ]</UL>
<H4>Description</H4>                

   <P> The word refers to the verb (whole sentence).  

 </P>  
<HR><HR><A NAME="165"></A><H2>165.  Quantifier Shift </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Amphiboly :: Quantifier Shift </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Illicit Quantifier Shift

</LI> </UL> <H4>Description</H4>                

   <P> A fallacy of the form:

<PRE>
	(&forall;x)(&exist;y) Rxy
<B>&#8756;</B>	(&exist;y)(&forall;x) Rxy
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> Every event must have a cause.  So, there must be a cause of every event, that is, a first cause which we call "God".

</LI> </UL> <H4>Counter-Examples</H4><UL>            

   <LI> Everybody loves someone.  Therefore, there is somebody whom everybody loves.  

 </LI> </UL> 
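   <P> The "everybody loves someone" counter-example can be realized in a finite model.  A small sketch (the names and the loving cycle are illustrative only): </P>

```python
# Finite-model counterexample to the quantifier shift: "everybody loves
# someone" can be true while "there is someone whom everybody loves" is false.
people = ["ann", "bob", "cal"]
loves = {("ann", "bob"), ("bob", "cal"), ("cal", "ann")}  # a loving cycle

everybody_loves_someone = all(any((x, y) in loves for y in people) for x in people)
someone_loved_by_all = any(all((x, y) in loves for x in people) for y in people)

print(everybody_loves_someone, someone_loved_by_all)  # True False
```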
<HR><HR><A NAME="166"></A><H2>166.  Accent </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Accent </I> ]</UL>
<H4>Alternate Names</H4><UL>            

</UL> <H4>Description</H4>                

   <P> Emphasis is used to suggest a meaning different from the actual content of the proposition.

   </P> <P> (My note) What about a TV commercial in which someone advocates a product and smiles as if to say, "This product will make you feel happy, your life more secure"?

</P>  <H4>Examples</H4><UL>            

   <LI> The first mate, seeking revenge on the captain, wrote in his journal, "The captain was sober today."  (Implying the captain is usually drunk)

   </LI> <LI> Newspaper headlines

   </LI> <LI> Contractual fine print

   </LI> <LI> Commercial "giveaways" and deceptive contest entry forms are frequent sources of fallacies of accent.  

 </LI> </UL> 
<HR><HR><A NAME="167"></A><H2>167.  Quoting Out of Context </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Fallacies of Semantics :: Quoting Out of Context </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Abstraction

</LI> </UL> <H4>Description</H4>                

   <P> Two primary forms:  Straw Man, and Appeal to Authority.

</P>  <H4>Examples</H4>                

   <P> Have the various fossil candidates for a place in our human ancestry stood the test of time?  One by one, various fossil man finds have flashed across the front pages of the newspapers and been the subject of many scientific studies and reports, only to be at last either discredited or just forgotten, replaced by newer finds which also eventually fade away.  In 1981 British scientist John Reader commented on this Hollywood character of some of our former alleged ancestors:  "Not many (if any) [fossil hominids] have held the stage for long; by now laymen could be forgiven for regarding each new arrival as no less ephemeral than the weather forecast..."  

 </P>  
<HR><HR><A NAME="168"></A><H2>168.  Slothful Induction </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Slothful Induction </I> ]</UL>
<H4>Description</H4>                

   The inductively strong conclusion of an inductive argument is denied despite the evidence to the contrary.  

  
<HR><HR><A NAME="169"></A><H2>169.  Ignoratio Elenchi </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: Ignoratio Elenchi </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Missing the Point

</LI> </UL> <H4>Description</H4>                

   <P> The premises of an argument warrant a different conclusion from the one the arguer draws.

</P>  <H4>Examples</H4><UL>            

   <LI> Any amount of inflation is bad for the economy.  Last month inflation was running at an annual rate of 10 percent.  This month the inflation rate is only 7 percent.  The economy is on the upswing.  

 </LI> </UL> 
<HR><HR><A NAME="170"></A><H2>170.  non sequitur </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Arguments :: Inferential Fallacies :: non sequitur </I> ]</UL>
<H4>Description</H4>                

   <P>A conclusion that does not follow from the given premises.  

 </P>  
<HR><HR><A NAME="171"></A><H2>171.  Fallacies of Explanations </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations </I> ]</UL>
<H4>Description</H4>                

   <P> An explanation is a form of reasoning which attempts to answer the question "why?"  For example, it is with an explanation that we answer questions such as, "Why is the sky blue?"  

 </P>  
<HR><HR><A NAME="172"></A><H2>172.  Subverted Support </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations :: Subverted Support </I> ]</UL>
<H4>Description</H4>                

   <P> An explanation is intended to explain why some phenomenon happens.  The explanation is fallacious if the phenomenon does not actually happen or if there is no evidence that it does happen.

</P>  <H4>Examples</H4><UL>            

   <LI> The reason why most bachelors are timid is that their mothers were domineering.

   </LI> <LI> John went to the store because he wanted to see Maria.

   </LI> <LI> The reason why most people oppose the strike is that they are afraid of losing their jobs.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the phenomenon which is being explained.  Show that there is no reason to believe that the phenomenon has actually occurred.  

 </P>  
<HR><HR><A NAME="173"></A><H2>173.  Non-support </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations :: Non-support </I> ]</UL>
<H4>Description</H4>                

   <P>An explanation is intended to explain why some phenomenon happens.  In this case, there is evidence that the phenomenon occurred, but it is trumped up, biased or ad hoc evidence.

</P>  <H4>Examples</H4><UL>            

   <LI> The reason why most bachelors are timid is that their mothers were domineering.

   </LI> <LI> The reason why I get four or better on my evaluations is that my students love me.

   </LI> <LI> The reason why Alberta has the lowest tuition in Canada is that tuition hikes have lagged behind other provinces.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the phenomenon which is being explained.  Show that the evidence advanced to support the existence of the phenomenon was manipulated in some way.  

 </P>  
<HR><HR><A NAME="174"></A><H2>174.  Limited Scope </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations :: Limited Scope </I> ]</UL>
<H4>Description</H4>                

   <P> The theory doesn't explain anything other than the single phenomenon it was devised to explain.

</P>  <H4>Examples</H4><UL>            

   <LI> There was hostility toward hippies in the 1960s because of their parents' resentment toward children.

   </LI> <LI> People get schizophrenia because different parts of their brains split apart.

</LI> </UL> <H4>Exposition</H4>                

   <P> Identify the theory and phenomenon it explains.  Show that the theory does not explain anything else.  Argue that theories which explain only the phenomenon are likely to be incomplete, at best.  

 </P>  
<HR><HR><A NAME="175"></A><H2>175.  Limited Depth </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations :: Limited Depth </I> ]</UL>
<H4>Description</H4>                

   <P> Theories explain phenomena by appealing to some underlying cause or phenomenon.  Theories which do not appeal to an underlying cause, and instead simply appeal to membership in a category, commit the fallacy of limited depth.

</P>  <H4>Examples</H4><UL>            

   <LI> My cat likes tuna because she's a cat.

   </LI> <LI> Ronald Reagan was militaristic because he was American.

   </LI> <LI> You're just saying that because you belong to the union.

</LI> </UL> <H4>Exposition</H4>                

   <P> Theories of this sort attempt to explain a phenomenon by showing that it is part of a category of similar phenomena.  Accept this, then press for an explanation of the wider category of phenomena.  Argue that a theory should refer to a cause, not a classification.  

 </P>  
<HR><HR><A NAME="176"></A><H2>176.  Anecdotal Evidence </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations :: Anecdotal Evidence </I> ]</UL>
<H4>Description</H4>                

   <P> Using an anecdote as proof.

</P>  <H4>Examples</H4><UL>            

   <LI> "There's abundant proof that god exists and is still performing miracles today.  Just last week I read about a girl who was dying of cancer.  Her whole family went to church and prayed for her, and she was cured."

   </LI> <LI> Urban Legends  

 </LI> </UL> 
<HR><HR><A NAME="177"></A><H2>177.  Ad Hoc </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: Fallacies of Explanations :: Ad Hoc </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Regressive Fallacy

   </LI> <LI> The Regression Fallacy

</LI> </UL> <H4>Description</H4>                

   <P> The Ad Hoc fallacy is to give an after-the-fact explanation which doesn't apply to other situations.  Often it will be disguised to resemble an argument.

</P>  <H4>Examples</H4>                

   <P> Assuming god treats everyone equally, then the following is an Ad Hoc explanation:

<PRE>
- "I was healed from cancer."
- "Praise the lord, then.  He is your healer."
- "So, will he heal others who have cancer?"
- "Er... The ways of god are mysterious."
</PRE>  

 </P>  
<HR><HR><A NAME="178"></A><H2>178.  ORR Analysis </H2><UL>[ <I> Argument :: Refutation :: Fallacies :: ORR Analysis </I> ]</UL>
<H4>ORR</H4>                
   <P><B>Object Role Requirements</B>

   </P> <P> These fallacies are arranged in a table of three main collumns:  Object, Role and Requirements.  The Object column enumerates  the kinds of objects in an argument.  The Role column enumerates the possible roles for each of those objects.  The Requirements column list all requirements for each Object-Role.  A requirement is written as a 'shall ...' statement.  A fourth collum listing appropriate fallacies may also appear in the table.  (The ORR table may also be arranged as a hierarchy of three levels.)  The ORR system works by first identifying an object, follwed by identifying its role in the argument then listing requirements for that particular object-role.  If an object does not fulfill the listed requirements, then the entire argument is fallacious.  

   </P> <P>The ORR system enumerates three objects in arguments: the proposition, the proposition set and the inference.  Below is the hierarchical list of the roles of these three objects.

<PRE>
Proposition:

   Premise
        Every Premise
        ...of Evidence
        ...of Definition
   Conclusion

Proposition Set:

        The Complete Set of Premises

Inference:

   Every Inference
   Deduction
   Induction
        infer attributes
             ...by composition/division
             ...by categorization
             ...by analogy
        infer cause/effect relation
        infer statistical prediction
</PRE>

</P>  <H4>Notes</H4><UL>            

   <LI> In the following hierarchy, requirements are preceded by 'Req:'

   </LI> <LI> In ORR, the term 'fundamental' may be thought of as meaning 'not circular'.  Fundamental implies that some reasoning must occur to move from a premise to a conclusion.  

 </LI> </UL> 
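The object-role-requirements lookup described above can be sketched as a small data structure.  This is my own illustration, not part of the ORR source: the object and role names follow the outline's hierarchy, but the requirement texts and the `check` function are hypothetical placeholders.

```python
# A hypothetical sketch of an ORR (Object-Role-Requirements) table as a
# nested dictionary.  Each object maps roles to 'shall ...' requirements.
orr_table = {
    "Proposition": {
        "Premise": ["shall be relevant to the conclusion",
                    "shall be plausible"],
        "Conclusion": ["shall be warranted by the premises"],
    },
    "Inference": {
        "Deduction": ["shall be valid"],
        "Induction": ["shall be inductively strong"],
    },
}

def check(obj, role, satisfied):
    """Apply the ORR rule: if any requirement for an object-role is
    unmet, the entire argument is fallacious."""
    reqs = orr_table[obj][role]
    unmet = [r for r, ok in zip(reqs, satisfied) if not ok]
    return ("fallacious", unmet) if unmet else ("ok", [])

# An invalid deduction fails its single requirement:
print(check("Inference", "Deduction", [False]))
```

Usage follows the three ORR steps: pick the object, pick its role, then test each listed requirement in turn.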
<HR><HR><A NAME="179"></A><H2>179.  'Real Life' Argumentation </H2><UL>[ <I> Argument :: 'Real Life' Argumentation </I> ]</UL>
<H4>Description</H4>                

   <P> This section is more concerned with live face-to-face argumentation.  

 </P>  
<HR><HR><A NAME="180"></A><H2>180.  Hierarchy of Logic & Argumentation </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation </I> ]</UL>
<H4>Description</H4>                

   <P> This hierarchy shows where Logic & Argumentation come in the scheme of things.

</P>  <H4>Hierarchy</H4><UL>            

   <LI> Argumentation is a form of Dialectic.

   </LI> <LI> Dialectic is a form of Rhetoric.

   </LI> <LI> Rhetoric is a form of Discourse.  

 </LI> </UL> 
<HR><HR><A NAME="181"></A><H2>181.  Discourse </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Discourse </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A sequence of connected sentences, such as the development of a thought or a conversation, as opposed to a collection of sentences that have no relation to one another.  

 </P>  
<HR><HR><A NAME="182"></A><H2>182.  Rhetoric </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> The development and communication of knowledge between speakers and listeners.

   </LI> <LI> The study of how messages influence people.

</LI> </OL> <H4>Notes</H4>                

   <P> From this second definition we can see that argumentation is a subset of rhetoric: it is the study of how people are influenced by reason-giving (that is, argumentation may be defined as reason-giving).

   </P> <P> Thinking rhetorically requires a certain frame of mind.

</P>  <H4></H4><OL>            

   <LI> Thinking in terms of an audience.

   </LI> <LI> Requires acknowledging audience predispositions and reasoning with them in mind.

   </LI> <LI> Requires awareness of choice by speakers and listeners.

   </LI> <LI> Requires recognizing that influence is noncoercive.  

 </LI> </OL> 
<HR><HR><A NAME="183"></A><H2>183.  Parts </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric :: Parts </I> ]</UL>
<H4>Description</H4>                

   <P> Ramus assigned the first two parts (invention and arrangement) to argument and the concern with truth, which he said was philosophy, and the last three (style, memory and delivery) to presentation, which he said was rhetoric.  

 </P>  
<HR><HR><A NAME="184"></A><H2>184.  invention </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric :: Parts :: invention </I> ]</UL>
<H4>Description</H4>                

   <P> Discovering and selecting the facts to be used in making your case.  

 </P>  
<HR><HR><A NAME="185"></A><H2>185.  arrangement </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric :: Parts :: arrangement </I> ]</UL>
<H4>Description</H4>                

   <P> Organizing the facts selected by 'invention'.  

 </P>  
<HR><HR><A NAME="186"></A><H2>186.  style </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric :: Parts :: style </I> ]</UL>
<H4>Description</H4>                

   <P> How you present the argument.  Choices of language, phraseology, etc.  

 </P>  
<HR><HR><A NAME="187"></A><H2>187.  memory </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric :: Parts :: memory </I> ]</UL>
<H4>Description</H4>                

   <P> Keeping in mind what one was about to say.  

 </P>  
<HR><HR><A NAME="188"></A><H2>188.  delivery </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Rhetoric :: Parts :: delivery </I> ]</UL>
<H4>Description</H4>                

   <P> Physical presentation.  The use of the voice (intonation), the body (body language), etc.  

 </P>  
<HR><HR><A NAME="189"></A><H2>189.  Dialectic </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Dialectic </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> The grand sweep of opposing historical forces, such as the clash between capitalism and communism.

   </LI> <LI> The process of discovering and testing knowledge through questions and answers.  

 </LI> </OL> 
<HR><HR><A NAME="190"></A><H2>190.  Argumentation </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Argumentation </I> ]</UL>
<H4>Description</H4>                

   Here, argumentation means verbal disputes among two or more parties.  

  
<HR><HR><A NAME="191"></A><H2>191.  Logic </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Hierarchy of Logic & Argumentation :: Logic </I> ]</UL>
<H4>Description</H4>                

   <P> Logic is mainly concerned with one side of an argument.  Such an argument <I>may</I> be face-to-face, but may also be more rhetorical such as that in a book, research paper or lecture.  

 </P>  
<HR><HR><A NAME="192"></A><H2>192.  Uses of Language </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Context :: Uses of Language </I> ]</UL>
<H4>Overview</H4>                

   <P> This is a very broad set of categories describing the uses of language.  Most uses of language are a complex blend of two or more of these.  

 </P>  
<HR><HR><A NAME="193"></A><H2>193.  Cognitive (Informative) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Context :: Uses of Language :: Cognitive (Informative) </I> ]</UL>
<H4>Properties</H4>                

   <P> Used to communicate information (and misinformation).

</P>  <H4>Examples</H4>                

   <P> Affirmation or denial of propositions, presentation of arguments.  

 </P>  
<HR><HR><A NAME="194"></A><H2>194.  Emotive (Expressive) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Context :: Uses of Language :: Emotive (Expressive) </I> ]</UL>
<H4>Properties</H4>                

   <P> Language used to vent or arouse feelings.

   </P> <P> Most statements with emotive meaning also contain cognitive elements.  Since logic is concerned chiefly with cognitive meaning, it's important that we be able to distinguish and disengage the cognitive meaning from the emotive.

</P>  <H4>Examples</H4><UL>            

   <LI> Poetry, anger, expression of joy.  

 </LI> </UL> 
<HR><HR><A NAME="195"></A><H2>195.  Value Claim </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Context :: Uses of Language :: Emotive (Expressive) :: Value Claim </I> ]</UL>
<H4>Description</H4>                

   <P> Many statements with emotive meaning contain <I> value claims </I>.  A value claim is a judgement such as good, bad, right, wrong, better, worse, more important or less important.

   </P> <P> Value claims are often the most important part of the cognitive meaning of emotive statements.  It is often useful to disengage the value claims of emotively charged statements from the emotive meaning and treat these claims as separate statements.  

 </P>  
<HR><HR><A NAME="196"></A><H2>196.  Directive (Imperative) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Context :: Uses of Language :: Directive (Imperative) </I> ]</UL>
<H4>Properties</H4>                

   <P> Language functioning to cause or prevent overt action.

</P>  <H4>Examples</H4><UL>            

   <LI> Commands & requests  

 </LI> </UL> 
<HR><HR><A NAME="197"></A><H2>197.  Types of Controversy </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy </I> ]</UL>
<H4>Overview</H4>                

   <P> This section outlines the various causes of disputes.  Disputes may arise within any one of the uses of language.  

 </P>  
<HR><HR><A NAME="198"></A><H2>198.  Disputes of Fact (Belief) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy :: Conative :: Disputes of Fact (Belief) </I> ]</UL>
<H4>Description</H4>                

   <P> The parties agree or disagree about whether something has in fact taken place or whether something is true.  Most of logic is concerned with this sort of dispute.  

 </P>  
<HR><HR><A NAME="199"></A><H2>199.  Disputes of Definition (Verbal) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy :: Conative :: Disputes of Definition (Verbal) </I> ]</UL>
<H4>Description</H4>                

   <P> A verbal dispute is one in which two or more parties argue about a subject but talk past each other because they don't realize that they don't actually agree upon the definition of a word.  

 </P>  
<HR><HR><A NAME="200"></A><H2>200.  Ambiguity </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy :: Conative :: Disputes of Definition (Verbal) :: Ambiguity </I> ]</UL>
<H4>Description</H4>                

   <P> The term at the root of the dispute has more than one DISTINCT meaning.  Each party is arguing with a different meaning in mind.  

 </P>  
<HR><HR><A NAME="201"></A><H2>201.  Vagueness </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy :: Conative :: Disputes of Definition (Verbal) :: Vagueness </I> ]</UL>
<H4>Description</H4>                

   <P> The term at the root of the dispute has many shades of meaning.  Each party is arguing from a different point of view.

</P>  <H4>Examples</H4>                

   <P> Jane:  The baby rhino is small.
   </P> <P> John:  200 lbs is not small.

  

 </P>  
<HR><HR><A NAME="202"></A><H2>202.  Disputes of Value (Attitude) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy :: Emotive :: Disputes of Value (Attitude) </I> ]</UL>
<H4>Description</H4>                

   <P> Opinion of approval or disapproval toward an event.  

 </P>  
<HR><HR><A NAME="203"></A><H2>203.  Disputes of Policy </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Sources :: Types of Controversy :: Directive :: Disputes of Policy </I> ]</UL>
<H4>Description</H4>                

   <P> Disagreement about what should be done or how something should be done.  

 </P>  
<HR><HR><A NAME="204"></A><H2>204.  Conditions for Argumentation </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Emergence :: Conditions :: Sources of Controversy :: Conditions for Argumentation </I> ]</UL>
<H4>Description</H4>                

   <P> People argue when the following conditions are met:

</P>  <H4>Conditions</H4><UL>            

   <LI> Some disagreement exists between parties.

   </LI> <LI> All parties consider the controversy non-trivial.

   </LI> <LI> The assent of the other party is desired.

   </LI> <LI> Assent is desired only if it is freely given.

   </LI> <LI> No easier means exists to resolve the disagreement (e.g., empirical methods cannot be used; a universally recognized authority cannot be consulted; the resolution cannot be deduced from what's already known).  

 </LI> </UL> 
<HR><HR><A NAME="205"></A><H2>205.  Resolution </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Resolution </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> A statement capturing the substance of the controversy.
   </LI> <LI> If we imagine an answer to the question posed by the controversy, that answer is the resolution.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> If the resolution can only be stated in multiple sentences or a single very complex sentence, it may be a sign that the participants are not engaging each other.  They may each be addressing a different aspect.  

 </LI> </UL> 
<HR><HR><A NAME="206"></A><H2>206.  Issues </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Issues </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Issues are the questions that are inherent in a controversy and vital to the success of the resolution.
   </P> <P> Issues are the questions that must be answered in order to prove the resolution.

</P>  <H4>Examples</H4>                

 <H4>Resolution</H4><UL>            

   <LI> Interfaith marriages generally fail.

</LI> </UL> <H4>Issues</H4><UL>            

   <LI> What counts as an interfaith marriage?

   </LI> <LI> What do we mean by succeed?

   </LI> <LI> How do we know if it succeeds?

   </LI> <LI> etc.  

 </LI> </UL> 
<HR><HR><A NAME="207"></A><H2>207.  Claims </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Claims </I> ]</UL>
<H4>Description</H4>                

   <P> The proposed intermediary or sub-resolution for each issue.  

 </P>  
<HR><HR><A NAME="208"></A><H2>208.  Topoi </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Topoi </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> Stock issues.

   </LI> <LI> Shortcuts to locating the issues of a controversy.

   </LI> <LI> Related to the kind of controversy/resolution  

 </LI> </OL> 
<HR><HR><A NAME="209"></A><H2>209.  For resolutions of Fact (Belief) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Topoi :: For resolutions of Fact (Belief) </I> ]</UL>
<H4></H4><UL>            

   <LI> What is the criterion for assessing the truth?

   </LI> <LI> Has the criterion been satisfied?  

 </LI> </UL> 
<HR><HR><A NAME="210"></A><H2>210.  For resolutions of Value (Attitude) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Topoi :: For resolutions of Value (Attitude) </I> ]</UL>
<H4></H4><UL>            

   <LI> Is the value truly good or bad as alleged?

   </LI> <LI> Which among competing values should be preferred?

   </LI> <LI> Has the value been properly applied to the specific situation?  

 </LI> </UL> 
<HR><HR><A NAME="211"></A><H2>211.  For resolutions of Policy </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Topoi :: For resolutions of Policy </I> ]</UL>
<H4></H4><UL>            

   <LI> Is there a problem?

   </LI> <LI> Where is credit or blame due?

   </LI> <LI> Will the proposal solve the problem?

   </LI> <LI> On balance, will the proposal be better?  

 </LI> </UL> 
<HR><HR><A NAME="212"></A><H2>212.  For resolutions of Definition (Verbal) </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Issues and Resolutions :: Topoi :: For resolutions of Definition (Verbal) </I> ]</UL>
<H4></H4><UL>            

   <LI> Is the interpretation relevant?

   </LI> <LI> Is it fair?

   </LI> <LI> How should we choose among competing interpretations?  

 </LI> </UL> 
<HR><HR><A NAME="213"></A><H2>213.  Stasis </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Stasis </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The focal point of a dispute.

</P>  <H4>Notes</H4>                

   <P> Stasis is determined by the response to the original assertion.  

 </P>  
<HR><HR><A NAME="214"></A><H2>214.  Stasis in place </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Stasis :: Stasis in place </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Concerns whether the discussion is occurring in the proper forum.

</P>  <H4>Examples</H4><UL>            
   <LI>
<PRE>
	Person A:		You stole my car.
	Person B:		Hey, don't accuse me of theft out here on the street!  Theft is a criminal offense.  If you've got a case, prosecute!  I'll see you in court.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Stasis of place is preemptive.  It does not take part in the progressive nature of the other three.  

 </LI> </UL> 
<HR><HR><A NAME="215"></A><H2>215.  Stasis in conjecture </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Stasis :: Stasis in conjecture </I> ]</UL>
<H4>Definitions</H4>                

   <P> Concerns whether a fact is true.

</P>  <H4>Examples</H4><UL>            
   <LI>
<PRE>
	Person A:		You stole my car.
	Person B:		No, I never even had possession of it.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Stasis of conjecture concedes stasis of place.  

 </LI> </UL> 
<HR><HR><A NAME="216"></A><H2>216.  Stasis in definition </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Stasis :: Stasis in definition </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Concerns what a fact should be called.

</P>  <H4>Examples</H4><UL>            
   <LI>
<PRE>
	Person A:		You stole my car.
	Person B:		No, I only borrowed it.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Stasis of definition concedes stasis of place and conjecture.  

 </LI> </UL> 
<HR><HR><A NAME="217"></A><H2>217.  Stasis in quality </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Stasis :: Stasis in quality </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Concerns whether the act is justified.

</P>  <H4>Examples</H4><UL>            

   <LI>
<PRE>
	Person A:		You stole my car.
	Person B:		Yes, but it's a good thing I did because I used it to take to the hospital someone who had fallen on your sidewalk when you hadn't shoveled the snow and ice off, and you would've been liable for it if I hadn't.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Stasis of quality concedes stasis of place, conjecture and definition.  

 </LI> </UL> 
<HR><HR><A NAME="218"></A><H2>218.  Presumption </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Presumption & Burden of Proof :: Presumption </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> What we assume to be the case, unless and until shown otherwise.

   </LI> <LI> A description of the state of affairs before argumentation begins.

   </LI> <LI> Identifies the position that would prevail in the absence of controversy.

   </LI> <LI> A default condition.

</LI> </OL> <H4>Examples</H4><UL>            

   <LI>
<PRE>
	A:	You stole my car.
	B:	No, I didn't.

	A made the charge, so A has the responsibility to prove it.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The party who does not have the presumption must initiate the dispute.  

 </LI> </UL> 
<HR><HR><A NAME="219"></A><H2>219.  Natural Presumptions </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Presumption & Burden of Proof :: Presumption :: Natural Presumptions </I> ]</UL>
<H4>Definition</H4>                

   <P> Those presumptions that are just out there in the world.

</P>  <H4>Examples</H4>                

   <P> There is naturally a presumption against restrictions on freedom, because people are by nature free beings.  And so, if we're going to restrict freedom, the restriction must be justified.  

 </P>  
<HR><HR><A NAME="220"></A><H2>220.  Artificial Presumptions </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Presumption & Burden of Proof :: Presumption :: Artificial Presumptions </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Presumptions stipulated by convention.

</P>  <H4>Examples</H4>                

   <P> The presumption that a person is innocent until proven guilty.  

 </P>  
<HR><HR><A NAME="221"></A><H2>221.  Burden of Proof </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Presumption & Burden of Proof :: Burden of Proof </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> The opposite of presumption.

   </LI> <LI> The ultimate responsibility to support one's position on the resolution.

</LI> </OL> <H4>Notes</H4>                

   <P> The party that lacks presumption is the side that bears the burden of proof.

   </P> <P> Burden of proof does not shift back and forth.  One party of a controversy always has the responsibility at the end of the controversy to establish that his or her position prevails.  The other party is the one that enjoys presumption.  

 </P>  
<HR><HR><A NAME="222"></A><H2>222.  Who has presumption </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Presumption & Burden of Proof :: Who has presumption </I> ]</UL>
<H4>Overview</H4><UL>            

   <LI> There's great strategic advantage in holding presumption.

   </LI> <LI> It is possible to either jockey for presumption or stipulate presumption.  

 </LI> </UL> 
<HR><HR><A NAME="223"></A><H2>223.  Burden of Rejoinder </H2><UL>[ <I> Argument :: 'Real Life' Argumentation :: Presumption & Burden of Proof :: Burden of Rejoinder </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The burden of going forward with the argument.

</P>  <H4>Examples</H4><UL>            

   <LI>
<PRE>
	A:  The federal government, rather than the states, ought to regulate the machinery of elections.  Doing so will avoid the problems of the 2000 election season.
	B:  Why should it be done?
		or
	B:  Well, it shouldn't be done.

	In either case, B is not advancing the argument and thus not meeting the burden of rejoinder.
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Unlike the burden of proof, the burden of rejoinder does shift back and forth.  

 </LI> </UL> 
<HR><HR><A NAME="224"></A><H2>224.  Proposition </H2><UL>[ <I> Proposition </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> See <A HREF="#9" TARGET="baseframe">definition of proposition</A>
  

 </P>  
<HR><HR><A NAME="225"></A><H2>225.  Analysis </H2><UL>[ <I> Proposition :: Analysis </I> ]</UL>
<H4>See Also</H4><UL>            

   <LI> definition of <A HREF="#1277" TARGET="baseframe">Analyze</A>  

 </LI> </UL> 
<HR><HR><A NAME="226"></A><H2>226.  Structure </H2><UL>[ <I> Proposition :: Analysis :: Structure </I> ]</UL>
<H4>Description</H4>                

   <P> The most common way to analyze a proposition is by its structure.  More detailed analyses of propositions are possible with the study of Deduction.  

   </P> <P> All propositions can be classified by structure.  We will consider these classifications here.  Every proposition can be classified into exactly one of these categories.
  

 </P>  
<HR><HR><A NAME="227"></A><H2>227.  Atomic Proposition </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Atomic Proposition </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Simple Proposition

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

<DIV CLASS="BLOCK">
   <P> A non-negated proposition which cannot be broken down into two or more component propositions.
</P>

</DIV>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> A cat is a mammal.

   </LI> <LI> All men are mortal.

   </LI> <LI> x=y

   </LI> <LI> Earth is the third planet from the sun.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="228"></A><H2>228.  Negated Proposition </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Negated Proposition </I> ]</UL>


<H4>Description</H4>                

<DIV CLASS="BLOCK">
   <P> A proposition which denies a claim.
</P>

</DIV>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> <B>It is not the case that</B>, the sky is blue.

   </LI> <LI> <B>It is not the case that</B>, all men are mortal.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="229"></A><H2>229.  Compound Proposition </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Molecular Proposition

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

<DIV CLASS="BLOCK">
   <P> A proposition composed of two or more simpler propositions joined together by connecting words such as <B>and</B>, <B>or</B> and <B>if...then...</B>.  Most connecting words have one of these three meanings or can be rewritten in terms of them.  A more detailed study of compound propositions is presented in Deduction.
</P>

</DIV>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> The cat is fat <B>and</B> the dog is brown.

   </LI> <LI> <B>If</B> it rains, <B>then</B> the ground will be wet.

   </LI> <LI> <B>Either</B> you're with us <B>or</B> you're against us.
</LI>  

 </DIV> </UL> 
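<P> The three connecting words above are truth-functional: the truth value of the compound is fixed entirely by the truth values of its parts.  A minimal sketch in Python (the helper names are our own illustration, not standard notation):</P>

```python
# Truth-functional meanings of 'and', 'or' and 'if...then...'.
# The helper names below are illustrative only.

def conj(p, q):
    """'p and q' - true only when both parts are true."""
    return p and q

def disj(p, q):
    """'p or q' (inclusive) - true when at least one part is true."""
    return p or q

def cond(p, q):
    """'if p then q' - false only when p is true and q is false."""
    return (not p) or q

# "If it rains, then the ground is wet" fails only when it rains
# and the ground is not wet:
assert not cond(True, False)
assert cond(True, True) and cond(False, True) and cond(False, False)
```

<P> Most other connecting words can be rewritten in terms of these three, as the Compound Proposition examples above suggest.</P>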
<HR><HR><A NAME="230"></A><H2>230.  Conjunction ('...and...') </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition :: Conjunction ('...and...') </I> ]</UL>
<H4>Description</H4>                

<DIV CLASS="BLOCK">
   <P> Two or more propositions joined by 'and'.  A conjunctive proposition asserts that each component proposition is true by itself.
</P>

</DIV>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> The sky is blue <B>and</B> the trees are green.

   </LI> <LI> All men and dogs are mammals.<BR>
   Is the same as:  All men are mammals <B>and</B> all dogs are mammals.

   </LI> <LI> Tom is happy, but Mary is not.<BR>
   Is the same as:  Tom is happy <B>and</B> it is not the case that Mary is happy.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="231"></A><H2>231.  Disjunction ('...or...') </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition :: Disjunction ('...or...') </I> ]</UL>
<H4>Description</H4>                

<DIV CLASS="BLOCK">
   <P> Two or more propositions joined by 'or'.  A disjunctive proposition asserts that at least one of the component propositions is true.
</P>

</DIV>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> a=b <B>or</B> <B>it's not the case that</B> a=b.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="232"></A><H2>232.  Conditions ('if...then...') </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition :: Conditions ('if...then...') </I> ]</UL>
<H4>Description</H4>                

   <P> Two propositions joined by 'if...then...'.  A conditional proposition asserts that if the antecedent (first proposition) is true, then the truth of the consequent (second proposition) is assured.  Conditional propositions come in three forms.

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> <B>If</B> it's raining <B>then</B> the ground is wet.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="233"></A><H2>233.  Sufficient Conditions </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition :: Conditions ('if...then...') :: Sufficient Conditions </I> ]</UL>


<H4>Form</H4><UL>            

   <LI> A is sufficient for B.

</LI> </UL> <H4>Semantics</H4><UL>            

   <LI> <B>If</B> A <B>then</B> B.

</LI> </UL> <H4>Description</H4>                

   <P> A proposition stating a sufficient condition is just a different way of stating an 'if...then...'.

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> Being a cat is a <B>sufficient condition</B> for being a mammal.<BR>
   In other words, <B>if</B> you're a cat <B>then</B> you're a mammal.

   </LI> <LI> Rain is <B>sufficient</B> for asserting that the ground is wet.<BR>
   In other words, <B>if</B> it's raining <B>then</B> the ground is wet.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="234"></A><H2>234.  Necessary Conditions </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition :: Conditions ('if...then...') :: Necessary Conditions </I> ]</UL>


<H4>Form</H4><UL>            

   <LI> A is necessary for B.

</LI> </UL> <H4>Semantics</H4><UL>            

   <LI> <B>If not</B> A <B>then not</B> B.

</LI> </UL> <H4>Description</H4>                

   <P> A proposition stating a necessary condition asserts that B cannot be the case without A.

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> Being a mammal is <B>necessary</B> for being a cat.<BR>
   In other words, <B>if</B> you're <B>not</B> a mammal, you <B>can't be</B> a cat.

   </LI> <LI> The ground being wet is a <B>necessary condition</B> for asserting that it's raining.<BR>
   In other words, <B>if</B> the ground is <B>not</B> wet, it <B>cannot be</B> raining.
</LI>  

 </DIV> </UL> 
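<P> The semantics '<B>if not</B> A <B>then not</B> B' is logically equivalent to its contrapositive, '<B>if</B> B <B>then</B> A'.  This can be checked by brute force over the four possible truth-value assignments (a Python sketch of our own, not part of the outline):</P>

```python
# A necessary condition 'if not A then not B' agrees with its
# contrapositive 'if B then A' under every assignment of truth values.

def implies(p, q):
    # material conditional: false only when p is true and q is false
    return (not p) or q

for a in (True, False):
    for b in (True, False):
        assert implies(not a, not b) == implies(b, a)
```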
<HR><HR><A NAME="235"></A><H2>235.  Necessary and Sufficient Conditions </H2><UL>[ <I> Proposition :: Analysis :: Structure :: Compound Proposition :: Conditions ('if...then...') :: Necessary and Sufficient Conditions </I> ]</UL>


<H4>Form</H4><UL>            

   <LI> A is necessary and sufficient for B.

</LI> </UL> <H4>Semantics</H4><UL>            

   <LI> <B>If</B> A <B>then</B> B, <B>and if</B> B <B>then</B> A.

   </LI> <LI> A <B>if and only if</B> B.

</LI> </UL> <H4>Description</H4>                

   <P> A proposition stating a necessary and sufficient condition is just a different way of stating that if either is true then so is the other; if one is false, then so is the other.  

 </P>  
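<P> The biconditional reading can also be checked by enumeration: the two conditionals together hold exactly when A and B share the same truth value.  A Python sketch (the function names are our own):</P>

```python
# 'A if and only if B' as two conditionals, checked in all four cases.

def implies(p, q):
    return (not p) or q

def iff(p, q):
    return implies(p, q) and implies(q, p)

for a in (True, False):
    for b in (True, False):
        assert iff(a, b) == (a == b)
```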
<HR><HR><A NAME="236"></A><H2>236.  Evaluation </H2><UL>[ <I> Proposition :: Evaluation </I> ]</UL>
<H4>Description</H4>                

   <P> In this section we study the ways in which logicians evaluate propositions.

</P>  <H4>See Also</H4><UL>            

   <LI> definition of <A HREF="#1289" TARGET="baseframe">evaluate</A>  

 </LI> </UL> 
<HR><HR><A NAME="237"></A><H2>237.  Truth Value </H2><UL>[ <I> Proposition :: Evaluation :: Truth Value </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Part of the definition of a proposition is that it is either true or false.  Its <I><B>truth value</B></I> is its evaluation as 'true' or 'false'.
  

 </P>  
<HR><HR><A NAME="238"></A><H2>238.  Classification </H2><UL>[ <I> Proposition :: Evaluation :: Classification </I> ]</UL>
<H4>Description</H4>                

   <P> Once a proposition, or all propositions in a set, have been assigned truth values, it is possible to assign an additional categorical label.  This label is broadly called <B>consistency</B>.  

 </P>  
<HR><HR><A NAME="239"></A><H2>239.  Inconsistent </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Truth Value :: Inconsistent </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Falsity

   </LI> <LI> Unsatisfiable

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition that cannot be true.

   </P> <P> A set of propositions which cannot be true simultaneously.

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> Mars is a planet <B>and</B> Mars is a star.

   </LI> <LI> Mars is a planet.  Mars is a star.

   </LI> <LI> The sun is shining <B>and</B> it is night.
</LI>  

 </DIV> </UL> 
<HR><HR><A NAME="240"></A><H2>240.  Contradiction </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Truth Value :: Inconsistent :: Contradiction </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A contradiction is any compound proposition of the form:  P and not-P.

</P>  <H4>Description</H4>                

   <P> A contradiction is a special case of inconsistent propositions because such propositions are inconsistent by form.  It is not necessary to understand the meanings of the two component propositions.  It is possible to make the sweeping statement that all propositions which have this form are false.

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> Tom is tall <B>AND</B> Tom is not tall.
</LI>

</DIV> </UL> <H4>Notes</H4><UL>            

   <LI> The example above is false by its form alone.  Recall the semantics of 'and' as the connective word of a conjunctive proposition:  'and' asserts that both component propositions are true.  In this case, if one is true then the other is false.  Therefore, the entire proposition is false.  

 </LI> </UL> 
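<P> Because a contradiction is false by form alone, the claim can be verified mechanically by trying both possible truth values of P (a minimal Python sketch):</P>

```python
# 'P and not-P' comes out false under every possible truth value of P,
# so any proposition of this form is false regardless of P's meaning.
results = [(p and not p) for p in (True, False)]
assert results == [False, False]
```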
<HR><HR><A NAME="241"></A><H2>241.  Consistent </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Truth Value :: Consistent </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Truth

   </LI> <LI> Satisfiable

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition which is true in at least one case.  A set of propositions which can be true simultaneously.  

 </P>  
<HR><HR><A NAME="242"></A><H2>242.  Tautology </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Truth Value :: Consistent :: Tautology </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Truth

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition which cannot be false (true in all cases).

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> Either Mars is a planet <B>or</B> Mars is not a planet.
</LI>
  

 </DIV> </UL> 
<HR><HR><A NAME="243"></A><H2>243.  Contingency </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Truth Value :: Consistent :: Contingency </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition which is true under some conditions and false under others.

</P>  <H4>Examples</H4><UL>            

<DIV CLASS="GRAYBLOCK">
   <LI> Today there is rain.
</LI>
  

 </DIV> </UL> 
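<P> The three labels above (tautology, contradiction, contingency) can be computed by enumerating cases.  A sketch in Python; the function name <B>classify</B> is our own illustration:</P>

```python
# Classify a one-variable proposition by evaluating it under every
# possible truth value of that variable.

def classify(prop):
    values = [prop(p) for p in (True, False)]
    if all(values):
        return "tautology"      # cannot be false
    if not any(values):
        return "contradiction"  # cannot be true
    return "contingency"        # true in some cases, false in others

assert classify(lambda p: p or not p) == "tautology"
assert classify(lambda p: p and not p) == "contradiction"
assert classify(lambda p: p) == "contingency"   # 'Today there is rain'
```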
<HR><HR><A NAME="244"></A><H2>244.  By Content </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Content </I> ]</UL>
<H4>Description</H4>                

   <P> Propositions may be classified according to what they express.  

 </P>  
<HR><HR><A NAME="245"></A><H2>245.  Fact </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Content :: Fact </I> ]</UL>
<H4>Description</H4>                

   <P> A proposition which asserts something about the way the world is.  

 </P>  
<HR><HR><A NAME="246"></A><H2>246.  Definition </H2><UL>[ <I> Proposition :: Evaluation :: Classification :: By Content :: Definition </I> ]</UL>
<H4>Description</H4>                

   <P> A proposition which equates a word or short phrase to another word or phrase.  It states that the two are interchangeable without impacting the meaning (or truth value) of the proposition that contains them.  

 </P>  
<HR><HR><A NAME="247"></A><H2>247.  Definitions </H2><UL>[ <I> Proposition :: Definitions </I> ]</UL>
<H4>Overview</H4>                

   <P> Definitions are a particularly well-understood form of proposition.  This section discusses what a definition is, the kinds of definitions and techniques for forming good definitions.  

 </P>  
<HR><HR><A NAME="248"></A><H2>248.  The Definition of </H2><UL>[ <I> Proposition :: Definitions :: The Definition of </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A definition is always a definition of a symbol by one or more other symbols (never of or by things or concepts, such as a chair, but of symbols, such as 'chair').  

 </P>  
<HR><HR><A NAME="249"></A><H2>249.  definiendum </H2><UL>[ <I> Proposition :: Definitions :: Analysis :: Components :: structural :: definiendum </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The symbol being defined.  

 </P>  
<HR><HR><A NAME="250"></A><H2>250.  definiens </H2><UL>[ <I> Proposition :: Definitions :: Analysis :: Components :: structural :: definiens </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The symbol or group of symbols being used to explain the meaning of the definiendum.  

 </P>  
<HR><HR><A NAME="251"></A><H2>251.  Scope </H2><UL>[ <I> Proposition :: Definitions :: Analysis :: Components :: Scope </I> ]</UL>
<H4>Overview</H4>                

   <P> A term used to describe the set of objects or concepts to which a definition applies.  

 </P>  
<HR><HR><A NAME="252"></A><H2>252.  Denotation (Extent/Extension) </H2><UL>[ <I> Proposition :: Definitions :: Analysis :: Components :: Scope :: Denotation (Extent/Extension) </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The collection of objects to which a general term correctly applies constitutes the extension of that term.  

 </P>  
<HR><HR><A NAME="253"></A><H2>253.  Connotation (Intent/Intension) </H2><UL>[ <I> Proposition :: Definitions :: Analysis :: Components :: Scope :: Connotation (Intent/Intension) </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The set of attributes shared by all and only those objects to which a general term refers is called the intension of that term.  

 </P>  
<HR><HR><A NAME="254"></A><H2>254.  Ambiguity Reduction </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Purpose :: Ambiguity Reduction </I> ]</UL>
<H4>Properties</H4>                

   <P> These definitions are stated for purposes of eliminating ambiguity in text.  

 </P>  
<HR><HR><A NAME="255"></A><H2>255.  Stipulative </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Purpose :: Ambiguity Reduction :: Stipulative </I> ]</UL>
<H4>Properties</H4>                

   <P> Assigning a new word, or a new meaning to an existing word, intended to apply only within the context in which the defining takes place.  A stipulative definition is neither true nor false.  A stipulative definition is a directive use of language (rather than informative).  

 </P>  
<HR><HR><A NAME="256"></A><H2>256.  Lexical </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Purpose :: Ambiguity Reduction :: Lexical </I> ]</UL>
<H4>Properties</H4>                

   <P> Giving the dictionary definition of a given word.  No new word is defined.  No new definition is created.  A lexical definition is either true or false.  A lexical definition is always an informative use of language (rather than directive).  

 </P>  
<HR><HR><A NAME="257"></A><H2>257.  Precising Definitions (vagueness reducing) </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Purpose :: Precising Definitions (vagueness reducing) </I> ]</UL>
<H4>Properties</H4>                

   <P> These definitions are stated in order to reduce <I>vagueness</I>.  

 </P>  
<HR><HR><A NAME="258"></A><H2>258.  Theoretical Definitions </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Purpose :: Theoretical Definitions </I> ]</UL>
<H4>Properties</H4>                

   <P> A dispute about a definition (usually among scientists, politicians or philosophers).  Ambiguity and/or vagueness are usually not at stake; the disputants are seeking comprehensive understanding.

</P>  <H4>Examples</H4><UL>            

   <LI> The battle among physicists over the definition of "heat", which continued for generations.  

 </LI> </UL> 
<HR><HR><A NAME="259"></A><H2>259.  Persuasive </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Purpose :: Persuasive </I> ]</UL>
<H4>Properties</H4>                

   <P> Definitions formulated and used persuasively, to resolve disputes by influencing the attitudes, or stirring the emotions, of the target audience.  

 </P>  
<HR><HR><A NAME="260"></A><H2>260.  Denotational (Extensional) </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Denotational (Extensional) </I> ]</UL>
<H4>Description</H4>                

   <P> Identify the collection of objects to which the general term being defined applies.  

 </P>  
<HR><HR><A NAME="261"></A><H2>261.  Enumerative </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Denotational (Extensional) :: Enumerative </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Definition by Example

</LI> </UL> <H4>Description</H4>                

   <P> Define a term by providing examples of objects denoted by it.  

 </P>  
<HR><HR><A NAME="262"></A><H2>262.  Ostensive </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Denotational (Extensional) :: Ostensive </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Demonstrative Definitions

</LI> </UL> <H4>Description</H4>                

   <P> An ostensive definition refers to the examples by means of pointing, or by some other gesture.  

 </P>  
<HR><HR><A NAME="263"></A><H2>263.  quasi-ostensive </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Denotational (Extensional) :: quasi-ostensive </I> ]</UL>
<H4>Description</H4>                

   <P> Pointing to an object (example) accompanied by some words.

</P>  <H4>Example</H4><UL>            

   <LI> The word 'desk' means this (point to a desk) article of furniture.  

 </LI> </UL> 
<HR><HR><A NAME="264"></A><H2>264.  Recursive-Inductive </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Denotational (Extensional) :: Recursive-Inductive </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Definition by Subclass

</LI> </UL> <H4>Description</H4>                

   <P> Assigns a meaning to a term by naming subclasses of the class denoted by the term.

</P>  <H4>Examples</H4><UL>            

   <LI> 'Tree' means an oak, pine, elm, spruce, maple and the like.  

 </LI> </UL> 
<HR><HR><A NAME="265"></A><H2>265.  Connotational (Intensional) </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) </I> ]</UL>
<H4>Description</H4>                

   <P> Define a general term by describing the attributes shared by the objects to which it applies.  

 </P>  
<HR><HR><A NAME="266"></A><H2>266.  Subjective </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Subjective </I> ]</UL>
<H4>Description</H4>                

   <P> The set of all attributes that the speaker believes to be possessed by objects denoted by that word.  

 </P>  
<HR><HR><A NAME="267"></A><H2>267.  Objective </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Objective </I> ]</UL>
<H4>Description</H4>                

   <P> The total set of characteristics shared by all the objects in the term's extension.  

 </P>  
<HR><HR><A NAME="268"></A><H2>268.  Contextual </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Contextual </I> ]</UL>
<H4>Description</H4>                

   <P> A contextual definition explains a term by showing how whole sentences in which the term occurs may be paraphrased, rather than by supplying a synonym for the term itself.  

 </P>  
<HR><HR><A NAME="269"></A><H2>269.  Conventional </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Conventional </I> ]</UL>
<H4>Description</H4>                

   <P> The subset of objective attributes which are commonly agreed to be the definition of an object.  

 </P>  
<HR><HR><A NAME="270"></A><H2>270.  Synonymous Definition </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Conventional :: Synonymous Definition </I> ]</UL>
<H4>Description</H4>                

   <P> Define a word using another word whose meaning is already understood, that has the same meaning as the word being defined.  

 </P>  
<HR><HR><A NAME="271"></A><H2>271.  Operational Definition </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Conventional :: Operational Definition </I> ]</UL>
<H4>Description</H4>                

   <P> A definition that states that the term is correctly applied to a given case if and only if the performance of specified operations in that case yields a specified result.  

 </P>  
<HR><HR><A NAME="272"></A><H2>272.  Definition by genus and difference </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Classification :: By Form :: Connotational (Intensional) :: Conventional :: Definition by genus and difference </I> ]</UL>
<H4>Description</H4>                

   <P> Genus is the segmenting of a group into categories (classes).  Difference is what distinguishes the objects being defined from the rest of the objects in a particular genus.  

 </P>  
<HR><HR><A NAME="273"></A><H2>273.  A definition shall be neither too broad nor too narrow </H2><UL>[ <I> Proposition :: Definitions :: Evaluation :: Requirements :: A definition shall be neither too broad nor too narrow </I> ]</UL>
<H4>Description</H4>                

   <P> A definition delimits a set of concepts or objects.  A definition is too broad if the definition expresses a set larger than that set actually desired (includes more things).  Similarly, a definition is too narrow if it excludes things which are actually in the set.  Thus, by these definitions of broad and narrow, it is possible for a definition to be both too broad and too narrow.  

 </P>  
<HR><HR><A NAME="274"></A><H2>274.  Inference </H2><UL>[ <I> Inference </I> ]</UL>
<H4>Description</H4>                

   <P> The bulk of active study in logic is concerned with the inferential claim.  How do we know that a deductive argument is valid or that an inductive argument is strong?

</P>  <H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#1282" TARGET="baseframe">Classical Logic</A>.  

 </LI> </UL> 
<HR><HR><A NAME="275"></A><H2>275.  Deduction </H2><UL>[ <I> Inference :: Deduction </I> ]</UL>


<H4>Description</H4>                

   <P> An argument in which it is impossible for the conclusion to be false while the premises are all true.  

 </P>  
<HR><HR><A NAME="276"></A><H2>276.  Proofs </H2><UL>[ <I> Inference :: Deduction :: Proofs </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A noncircular, unambiguous, valid deductive argument with clearly true premises.  

 </P>  
<HR><HR><A NAME="277"></A><H2>277.  Classification </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Classification </I> ]</UL>
<H4>Description</H4>                

   <P> Proofs may be categorized by their purpose.  

 </P>  
<HR><HR><A NAME="278"></A><H2>278.  Proof of Consequence </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Classification :: Proof of Consequence </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proof of consequence is a proof that demonstrates that some conclusion &#936; follows necessarily (validly) from some set of premises &#915;.  This can be notated as:

<UL>
	&#915;  &#8870;  &#936;
</UL>

</P>  <H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#1298" TARGET="baseframe"> theorem </A>.  

 </LI> </UL> 
<HR><HR><A NAME="279"></A><H2>279.  Proof of Nonconsequence </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Classification :: Proof of Nonconsequence </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Refutation Proofs

   </LI> <LI> Proofs of Refutation

   </LI> <LI> (Proof of) Counter Example

</LI> </UL> <H4>Description</H4>                

   <P> A proof constructed to demonstrate that some conclusion &#936; does not follow from some set of premises &#915;.

<UL>
	&#915;  &#8876;  &#936;
</UL>

</P>  <H4>Elements</H4><OL>            

   <LI> Affirmation of all the argument's premises.

   </LI> <LI> A denial of the argument's conclusion.

   </LI> <LI> An explanation (or example) of how this can be.

</LI> </OL> <H4>Technique</H4><UL>            

   <LI> Prove that it is possible for the conclusion to be false while all the premises are true.  

 </LI> </UL> 
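<P> For truth-functional arguments, this technique amounts to a search for a counterexample: an assignment of truth values that makes every premise true and the conclusion false.  A Python sketch (the argument chosen, 'P or Q, therefore P', is our own example):</P>

```python
# Search for counterexamples: assignments making all premises true
# and the conclusion false.  Finding one proves nonconsequence.
from itertools import product

def counterexamples(premises, conclusion, nvars):
    found = []
    for vals in product((True, False), repeat=nvars):
        if all(prem(*vals) for prem in premises) and not conclusion(*vals):
            found.append(vals)
    return found

# 'P or Q' does not entail 'P':
hits = counterexamples([lambda p, q: p or q], lambda p, q: p, 2)
assert hits == [(False, True)]   # P false, Q true refutes the inference
```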
<HR><HR><A NAME="280"></A><H2>280.  Leibniz' Axioms (Proof Method) </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Techniques :: Leibniz' Axioms (Proof Method) </I> ]</UL>


<H4>Description</H4>                

   <P> Leibniz developed a logic based upon four axioms which he then used to prove all the valid inferences in Term Logic.

</P>  <H4>Axioms</H4><OL>            

   <LI> Everything is identical to itself<BR>
   a=a

   </LI> <LI> Law of non-contradiction<BR>
   &#172;(P &#8743; &#172;P)

   </LI> <LI> Law of excluded middle<BR>
   P &#8744; &#172;P

   </LI> <LI> Law of substitution<BR>
   a=b, b=c.  So, a=c  

 </LI> </OL> 
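<P> The truth-functional axioms can be checked mechanically, and the identity and substitution laws illustrated with concrete values; a minimal Python sketch:</P>

```python
# The first three axioms checked directly; the fourth illustrated.
x = 42
assert x == x                 # 1. everything is identical to itself

for p in (True, False):
    assert not (p and not p)  # 2. law of non-contradiction
    assert p or not p         # 3. law of excluded middle

a, b, c = 7, 7, 7             # 4. substitution: a=b, b=c, so a=c
assert a == b and b == c and a == c
```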
<HR><HR><A NAME="281"></A><H2>281.  Reductio ad absurdum </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Techniques :: Reductio ad absurdum </I> ]</UL>

<H4>Description</H4>                

   <P> Leibniz used his law of non-contradiction and law of excluded middle to create the reductio ad absurdum (indirect proof).  A method of proof where we start with a set of consistent premises, then assert the negation of what we want to prove.  If a contradiction is consequently inferred, then the negated conclusion must be false.  Hence, the non-negated form of the conclusion must be true.

   </P> <P> It is most common to use reductio ad absurdum to introduce an entirely new proposition into a proof.  This is done by introducing a hypothetical argument with a hypothesis (the negated form of the proposition).  The contradiction must then be derived in order to introduce the desired proposition.  A hypothetical argument is usually introduced by phrases like 'let's suppose' and 'if we suppose'.

</P>  <H4>Examples</H4><UL>            

   <LI> Who stole the cookies?

<PRE>
	If someone can reach the cookies, then he is tall.
	If someone stole the cookies, then he can reach the cookies.
	John is not tall.
		Suppose that, John stole the cookies.
		John can reach the cookies.  (from second premise and supposition)
		John is tall.   (from first premise and previous proposition)
		John is tall AND John is not tall.   (from third premise and previous proposition)
<B>&#8756;</B>	John did not steal the cookies.
</PRE>  

 </LI> </UL> 
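<P> The cookie argument can also be checked by brute force: enumerate every assignment of truth values to 'John can reach the cookies', 'John stole the cookies' and 'John is tall', keep those satisfying the three premises, and observe that John did not steal the cookies in any of them.  A Python sketch of the example above:</P>

```python
# Brute-force check of the reductio: in every assignment satisfying
# the three premises, 'John stole the cookies' comes out false.
from itertools import product

models = []
for reach, stole, tall in product((True, False), repeat=3):
    premise1 = (not reach) or tall   # if he can reach them, he is tall
    premise2 = (not stole) or reach  # if he stole them, he can reach them
    premise3 = not tall              # John is not tall
    if premise1 and premise2 and premise3:
        models.append((reach, stole, tall))

# Only one assignment survives, and in it John did not steal the cookies.
assert models == [(False, False, False)]
```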
<HR><HR><A NAME="282"></A><H2>282.  Proof by cases </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Techniques :: Proof by cases </I> ]</UL>
<H4>Description</H4>                

   <P> Given a premise of alternatives (P &#8744; Q &#8744; ...):  if we can prove, one case at a time (using subproofs), that each alternative implies some R (that is, prove that (P &#8594; R), and that (Q &#8594; R), etc.), then we can discharge the premise of alternatives and infer R.
  

 </P>  
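<P> For two alternatives, the rule can be verified truth-functionally by enumerating all eight assignments (a Python sketch of our own):</P>

```python
# From 'P or Q', 'P -> R' and 'Q -> R', R follows: whenever all
# three premises are true, R is true as well.
from itertools import product

for p, q, r in product((True, False), repeat=3):
    premises = (p or q) and ((not p) or r) and ((not q) or r)
    if premises:
        assert r
```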
<HR><HR><A NAME="283"></A><H2>283.  Two-Column Proofs </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Two-Column Proofs </I> ]</UL>


<H4>Description</H4>                

   <P> A two-column proof is a proof which follows a standard form designed to make the proof easier to follow.  

 </P>  
<HR><HR><A NAME="284"></A><H2>284.  Structure </H2><UL>[ <I> Inference :: Deduction :: Proofs :: Two-Column Proofs :: Structure </I> ]</UL>
<H4>Description</H4>                

   <P> A two-column proof is a sequence of steps, one-per-line, each with the following form:

<PRE>
	&lt;step&gt;	&lt;proposition&gt;	&lt;reason&gt;
</PRE>

   </P> <P> &lt;step&gt; is the step number.  Steps are numbered starting with 1.

   </P> <P> &lt;proposition&gt; is the proposition asserted on that step of the proof.

   </P> <P> &lt;reason&gt; is the reasoning behind why you asserted the proposition at this step of the proof.  If the proposition is one of the original premises you can write 'given'.  If the proposition is a consequence of previous steps, those steps are cited in the reason.  

 </P>  
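<P> For example, a short two-column proof of 'the ground is wet' from 'if it rains, then the ground is wet' and 'it rains' might look like this (the argument is our own illustration):</P>

```
	1	If it rains, then the ground is wet.	given
	2	It rains.	given
	3	The ground is wet.	from 1, 2
```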
<HR><HR><A NAME="285"></A><H2>285.  Classical Logics </H2><UL>[ <I> Inference :: Deduction :: Classical Logics </I> ]</UL>
<H4>Description</H4>                

   <P> Formal logics study patterns of reasoning.  In formal deductive logic, a deductive argument is valid if and only if it follows a form already considered valid.

   </P> <P> Classical logics exhibit several characteristics:

</P>  <H4> </H4><UL>            

   <LI> defines a proposition as a statement which makes a claim.

   </LI> <LI> conforms to <A HREF="#280" TARGET="baseframe">Leibniz' Laws</A>.<BR>
   Which consequently permits the use of the <A HREF="#281" TARGET="baseframe">reductio ad absurdum</A> proof technique.

   </LI> <LI> studies one or both of two logical concepts:  the truth-functional operators and the grouping of things into categories.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Logics which do not conform to the last criterion (which is widely considered trivial) but extend the language of some strictly classical logic, are often categorized as extension (or extended) classical logics. 

   </LI> <LI> Interestingly, classical logic does not require that the premises be relevant to the conclusion.  In most cases the inferential claim takes care of relevance, but there are exceptions which allow a conclusion from premises irrelevant to it.
  

 </LI> </UL> 
<HR><HR><A NAME="286"></A><H2>286.  Term Logic </H2><UL>[ <I> Inference :: Deduction :: Term Logic </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Aristotelian Logic

   </LI> <LI> Categorical Logic

   </LI> <LI> Syllogistic Logic

</LI> </UL> <H4>Description</H4>                

   <P> Term logic studies arguments whose validity depends on 'all', 'no', 'some' and similar notions.  

 </P>  
<HR><HR><A NAME="287"></A><H2>287.  Quantifier </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Lexical Elements :: Quantifier </I> ]</UL>
<H4>Definition</H4>                

   <P>  A word which states what portion of a category a proposition refers to.

</P>  <H4>Forms</H4><UL>            

   <LI> The universal quantifier:  "All"

   </LI> <LI> The universal quantifier:  "No"

   </LI> <LI> The particular quantifier:  "Some"

   </LI> <LI> The singular quantifier:  &lt;empty&gt;  

 </LI> </UL> 
<HR><HR><A NAME="288"></A><H2>288.  Copula </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Lexical Elements :: Copula </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The word that separates the subject and predicate terms.

</P>  <H4>Forms</H4><UL>            

   <LI> "is" (or "are")

   </LI> <LI> "is not" (or "are not")  

 </LI> </UL> 
<HR><HR><A NAME="289"></A><H2>289.  Term </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Lexical Elements :: Term </I> ]</UL>
<H4>Description</H4>                

   <P> Terms are elements that are not part of the language of term logic.  Rather, they denote objects and categories.  

 </P>  
<HR><HR><A NAME="290"></A><H2>290.  Categorical Term </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Lexical Elements :: Term :: Categorical Term </I> ]</UL>
<H4>Description</H4>                

   <P> A categorical term denotes a group of objects.

</P>  <H4>Forms</H4><UL>            

   <LI> It's traditional to signify a categorical term by an upper case letter.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> C, cat

   </LI> <LI> R, religious people

   </LI> <LI> M, males  

 </LI> </UL> 
<HR><HR><A NAME="291"></A><H2>291.  Singular Term </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Lexical Elements :: Term :: Singular Term </I> ]</UL>
<H4>Description</H4>                

   <P> A singular term denotes a particular object.

</P>  <H4>Forms</H4><UL>            

   <LI> It's traditional to signify a singular term by a lower case letter.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> t, Tom

   </LI> <LI> m, the man standing on the corner.

   </LI> <LI> f, Mary's father  

 </LI> </UL> 
<HR><HR><A NAME="292"></A><H2>292.  Well-Formed Formulas (WFF's) </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Well-Formed Formulas (WFF's) </I> ]</UL>
<H4>Description</H4>                

   <P> Generally, a proposition of Term Logic takes the following syntax:

<PRE>
	quantifier   subject-term   copula   predicate-term
</PRE>

   </P> <P> With the restriction that quantifiers 'all' and 'no' may only be used with copula 'is' (or 'are').

   </P> <P> This results in eight possible proposition forms which are traditionally given names (A, E, I, O, As and Es).

   </P> <P>
   <PRE>
   A:	all A is B.
   E:	no A is B.
   I:	some A is B.
   O:	some A is not B.
   As:	a is b.
	a is B.
   Es:	a is not b.
	a is not B.
   </PRE>  

 </P>  
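The eight standard forms can be generated mechanically from the syntax above.  A minimal Python sketch (the class and field names are illustrative, not part of term logic):

```python
from dataclasses import dataclass

# A proposition is: quantifier  subject-term  copula  predicate-term.
# The singular forms (As, Es) use an empty quantifier.
@dataclass
class Proposition:
    quantifier: str   # "all", "no", "some", or "" (singular)
    subject: str
    negative: bool    # True -> copula "is not"
    predicate: str

    def render(self) -> str:
        copula = "is not" if self.negative else "is"
        quant = f"{self.quantifier} " if self.quantifier else ""
        return f"{quant}{self.subject} {copula} {self.predicate}."

print(Proposition("all", "A", False, "B").render())   # all A is B.
print(Proposition("some", "A", True, "B").render())   # some A is not B.
print(Proposition("", "a", False, "B").render())      # a is B.
```

Enforcing the restriction that 'all' and 'no' pair only with 'is' would be one extra check in render; it is omitted here for brevity.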
<HR><HR><A NAME="293"></A><H2>293.  Lemmon </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Well-Formed Formulas (WFF's) :: Alternate Representations :: Lemmon </I> ]</UL>
<H4>Description</H4>                

   <P> Lemmon ignores singular form propositions.

   </P> <P> Let A, E, I and O represent the four categorical propositions, and S and P represent arbitrary subject and predicate terms.

<PRE>
	affirmative	negative

universal:	A(S,P)		E(S,P)
particular:	I(S,P)		O(S,P)
</PRE>  

 </P>  
<HR><HR><A NAME="294"></A><H2>294.  Algebraic Symbolism </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Well-Formed Formulas (WFF's) :: Alternate Representations :: Algebraic Symbolism </I> ]</UL>
<H4>Description</H4>                

   <P> A Boolean algebraic technique is often used to represent the categorical propositions.

   </P> <P> 0 (zero) is used to represent the empty set.  Propositions are written as equations equal to or not equal to the empty set.  These are written using the equal sign and the not-equal sign; we will use = and != for these, respectively.

   </P> <P> Categories will be notated by capital letters.  A complement (everything NOT in a category) will be noted by placing a bar over the category letter (here we will follow the letter by a single quote).  To represent the intersection of two sets (the set of things common to two sets), we place a dot between them.  This dot is often omitted.

   </P> <P> Using S and P to notate the subject and predicate terms, the four categorical forms can then be written as:

<PRE>
A	SP' = 0	
E	SP = 0
I	SP != 0
O	SP' != 0
</PRE>  

 </P>  
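Reading each category as a set, the four equations can be checked directly in Python; the universe and category contents below are illustrative:

```python
# Universe of discourse and two categories (contents are illustrative).
U = {"rex", "felix", "tweety", "nemo"}
S = {"rex", "felix"}            # subject term
P = {"rex", "felix", "tweety"}  # predicate term

comp = lambda X: U - X  # complement: everything NOT in the category

# A: SP' = 0   -- "all S are P": nothing is in S but outside P
print(S & comp(P) == set())   # True: all of S lies inside P
# E: SP = 0    -- "no S are P"
print(S & P == set())         # False here: S and P overlap
# I: SP != 0   -- "some S are P"
print(S & P != set())         # True
# O: SP' != 0  -- "some S are not P"
print(S & comp(P) != set())   # False: no member of S lies outside P
```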
<HR><HR><A NAME="295"></A><H2>295.  Formalization Hints </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints </I> ]</UL>
<H4>Description</H4>                

   <P> In common speech, categorical propositions rarely appear in one of the eight standard forms.  The following is a set of 'hints' for translating such propositions into a standard form proposition.  

 </P>  
<HR><HR><A NAME="296"></A><H2>296.  Terms Without Nouns </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Terms Without Nouns </I> ]</UL>
<H4>Description</H4>                

   <P> A categorical proposition must have a noun for both the subject term and the predicate term.  If one is missing, it should be filled in.

</P>  <H4>Examples</H4><UL>            

   <LI> Some roses are red.<BR>
   Becomes:  <B>Some</B> roses <B>are</B> red flowers.

   </LI> <LI> All tigers are carnivorous.<BR>
   Becomes:  <B>All</B> tigers <B>are</B> carnivorous animals.  

 </LI> </UL> 
<HR><HR><A NAME="297"></A><H2>297.  Nonstandard Verbs </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Nonstandard Verbs </I> ]</UL>
<H4>Description</H4>                

   <P> The only copulas permissible in the standard form proposition are 'is' and 'is not'.

   </P> <P> Sometimes the copula is an alternate form of the verb 'to be'.  Other times it's a completely different verb.

</P>  <H4>Examples</H4><UL>            

   <LI> Some college students will become educated.<BR>
   Becomes:  <B>Some</B> college students <B>are</B> persons who will become educated.

   </LI> <LI> Some dogs would rather bark than bite.<BR>
   Becomes:  <B>Some</B> dogs <B>are</B> animals that would rather bark than bite.

   </LI> <LI> Some birds fly south during the winter.<BR>
   Becomes:  <B>Some</B> birds <B>are</B> animals that fly south during the winter.

   </LI> <LI> All ducks swim.<BR>
   Becomes:  <B>All</B> ducks <B>are</B> animals that swim.  

 </LI> </UL> 
<HR><HR><A NAME="298"></A><H2>298.  Singular Propositions </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Singular Propositions </I> ]</UL>
<H4>Description</H4>                

   <P> These are propositions about named persons, places, things or times.  They may be translated by means of one of the following phrases:

<PRE>
	persons identical to
	places identical to
	things identical to
	cases identical to
	times identical to
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> George went home.<BR>
   Becomes:  <B>All</B> persons identical to George <B>are</B> persons who went home.

   </LI> <LI> Sandra did not go shopping.<BR>
   Becomes:  <B>All</B> persons identical to Sandra <B>are</B> persons who did not go shopping.

   </LI> <LI> I hate gin.<BR>
   Becomes:  <B>All</B> persons who <B>are</B> identical to me are persons who hate gin.  

 </LI> </UL> 
<HR><HR><A NAME="299"></A><H2>299.  Adverbs and Pronouns </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Adverbs and Pronouns </I> ]</UL>
<H4>Description</H4><UL>            

   <LI> Spatial adverbs:  where, wherever, anywhere, everywhere or nowhere; may be translated in terms of 'places'.

   </LI> <LI> Temporal adverbs:  when, whenever, anytime, always or never; may be translated in terms of 'times'.

   </LI> <LI> Pronouns:  who, whoever or anyone; may be translated in terms of 'persons'.

   </LI> <LI> Pronouns:  what, whatever or anything; may be translated in terms of 'things'.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> He always wears a suit to work.<BR>
   Becomes:  <B>All</B> times he goes to work <B>are</B> times he wears a suit.

   </LI> <LI> He is always clean shaven.<BR>
   Becomes:  <B>All</B> times <B>are</B> times he is clean shaven.

   </LI> <LI> She never brings her lunch to school.<BR>
   Becomes:  <B>No</B> times she goes to school <B>are</B> times she brings her lunch.

   </LI> <LI> Nowhere on earth are there any unicorns.<BR>
   Becomes:  <B>No</B> places on earth <B>are</B> places there are unicorns.

   </LI> <LI> Whoever works hard will succeed.<BR>
   Becomes:  <B>All</B> persons who work hard <B>are</B> persons who will succeed.

   </LI> <LI> He glitters when he walks.<BR>
   Becomes:  <B>All</B> times he walks <B>are</B> times he glitters.

   </LI> <LI> She goes where she chooses.<BR>
   Becomes:  <B>All</B> places she chooses to go <B>are</B> places she goes.

   </LI> <LI> She does what she wants.<BR>
   Becomes:  <B>All</B> things she wants to do <B>are</B> things she does.  

 </LI> </UL> 
<HR><HR><A NAME="300"></A><H2>300.  Unexpressed Quantifiers </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Unexpressed Quantifiers </I> ]</UL>
<H4>Description</H4>                

   <P> Many statements have implied quantifiers.

</P>  <H4>Examples</H4><UL>            

   <LI> Emeralds are green gems.<BR>
   Becomes:  <B>All</B> emeralds <B>are</B> green gems.

   </LI> <LI> There are lions in the zoo.<BR>
   Becomes:  <B>Some</B> lions <B>are</B> animals in the zoo.

   </LI> <LI> A tiger is a mammal.<BR>
   Becomes:  <B>All</B> tigers <B>are</B> mammals.

   </LI> <LI> A fish is not a mammal.<BR>
   Becomes:  <B>No</B> fish <B>are</B> mammals.

   </LI> <LI> A tiger roared.<BR>
   Becomes:  <B>Some</B> tigers <B>are</B> animals that roared.

   </LI> <LI> Children are human beings.<BR>
   Becomes:  <B>All</B> children <B>are</B> human beings.

   </LI> <LI> Children live next door.<BR>
   Becomes:  <B>Some</B> children <B>are</B> persons who live next door.  

 </LI> </UL> 
<HR><HR><A NAME="301"></A><H2>301.  Nonstandard Quantifiers </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Nonstandard Quantifiers </I> ]</UL>
<H4>Description</H4>                

   <P> Often quantifiers other than the standard form quantifiers are used.

</P>  <H4>Examples</H4><UL>            

   <LI> A few soldiers are heroes.<BR>
   Becomes:  <B>Some</B> soldiers <B>are</B> heroes.

   </LI> <LI> Anyone who votes is a citizen.<BR>
   Becomes:  <B>All</B> voters <B>are</B> citizens.

   </LI> <LI> Not everyone who votes is a Democrat.<BR>
   Becomes:  <B>Some</B> voters <B>are not</B> Democrats.

   </LI> <LI> Not a single dog is a cat.<BR>
   Becomes:  <B>No</B> dogs <B>are</B> cats.

   </LI> <LI> All newborns are not able to talk.<BR>
   Becomes:  <B>No</B> newborns <B>are</B> people able to talk.

   </LI> <LI> All prisoners are not violent.<BR>
   Becomes:  <B>Some</B> prisoners <B>are not</B> violent people.

   </LI> <LI> Few sailors entered the regatta.<BR>
   Becomes:  <B>Some</B> sailors <B>are</B> persons who entered the regatta and <B>some</B> sailors <B>are not</B> persons who entered the regatta.  

 </LI> </UL> 
<HR><HR><A NAME="302"></A><H2>302.  Conditional Statements </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Conditional Statements </I> ]</UL>
<H4>Description</H4>                

   <P> When the antecedent and consequent of a conditional statement talk about the same thing, the statement can usually be translated into categorical form.  Such statements are always translated as universals.

</P>  <H4>Examples</H4><UL>            

   <LI> If it's a mouse, then it's a mammal.<BR>
   Becomes:  <B>All</B> mice <B>are</B> mammals.

   </LI> <LI> If it's a turkey, then it's not a mammal.<BR>
   Becomes:  <B>No</B> turkeys <B>are</B> mammals.

   </LI> <LI> If an animal has four legs, then it is not a bird.<BR>
   Becomes:  <B>No</B> four-legged animals <B>are</B> birds.

   </LI> <LI> If Los Angeles is in California, then Los Angeles is a large city.<BR>
   Becomes:  <B>All</B> California cities identical to Los Angeles <B>are</B> large cities.

   </LI> <LI> A person will succeed if he or she perseveres.<BR>
   Becomes:  <B>All</B> persons who persevere <B>are</B> persons who will succeed.

   </LI> <LI> Jewelry is expensive if it is made of gold.<BR>
   Becomes:  <B>All</B> gold jewelry <B>is</B> expensive.

   </LI> <LI> If it's not a mammal, then it's not a mouse.<BR>
   Becomes:  <B>All</B> mice <B>are</B> mammals.

   </LI> <LI> If a company is not well managed, then it is not a good investment.<BR>
   Becomes:  <B>All</B> companies that are good investments <B>are</B> well-managed companies.

   </LI> <LI> Tomatoes are edible unless they are spoiled.<BR>
   Becomes:  <B>All</B> unspoiled tomatoes <B>are</B> edible tomatoes.

   </LI> <LI> Unless a boy misbehaves he will be treated decently.<BR>
   Becomes:  <B>All</B> boys who do not misbehave <B>are</B> boys who will be treated decently.  

 </LI> </UL> 
<HR><HR><A NAME="303"></A><H2>303.  Exclusive Propositions </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Exclusive Propositions </I> ]</UL>
<H4>Description</H4>                

   <P> Many propositions that involve the words 'only', 'none but', 'none except' and 'no...except' are exclusive propositions.  In these cases the predicate is usually phrased first.

</P>  <H4>Examples</H4><UL>            

   <LI> Only elected officials will attend the convention.<BR>
   Becomes:  <B>All</B> persons who will attend the convention <B>are</B> elected officials.

   </LI> <LI> None but the brave deserve the fair.<BR>
   Becomes:  <B>All</B> persons who deserve the fair <B>are</B> brave persons.

   </LI> <LI> No birds except peacocks are proud of their tails.<BR>
   Becomes:  <B>All</B> birds proud of their tails <B>are</B> peacocks.

   </LI> <LI> He owns only blue-chip stocks.<BR>
   Becomes:  <B>All</B> stocks owned by him <B>are</B> blue-chip stocks.

   </LI> <LI> She invited only wealthy socialites.<BR>
   Becomes:  <B>All</B> persons invited by her <B>are</B> wealthy socialites.  

 </LI> </UL> 
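The predicate-first reversal for 'only' can be sketched as a toy string rewrite; this is a hypothetical helper for sentences already in "only X are Y" shape, not a general translator:

```python
def translate_only(sentence: str) -> str:
    # "Only X are Y."  ->  "All Y are X."  (the predicate is phrased first)
    assert sentence.lower().startswith("only ")
    body = sentence[len("Only "):].rstrip(".")
    x, y = body.split(" are ")
    return f"All {y} are {x}."

print(translate_only("Only elected officials are convention attendees."))
# All convention attendees are elected officials.
```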
<HR><HR><A NAME="304"></A><H2>304.  Exceptive Propositions </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: Exceptive Propositions </I> ]</UL>
<H4>Description</H4>                

   <P> Propositions of the form "all except S are P" and "all but S are P".  These must be translated as pairs of conjoined categorical propositions.

</P>  <H4>Examples</H4><UL>            

   <LI> All except students are invited.<BR>
   Becomes:  <B>No</B> students <B>are</B> invited persons, and <B>all</B> nonstudents <B>are</B> invited persons.

   </LI> <LI> All but managers must report to the president.<BR>
   Becomes:  <B>No</B> managers <B>are</B> persons who must report to the president, and <B>all</B> non-managers <B>are</B> persons who must report to the president.  

 </LI> </UL> 
<HR><HR><A NAME="305"></A><H2>305.  "The Only" </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Syntax :: Formalization Hints :: "The Only" </I> ]</UL>
<H4>Description</H4>                

   <P> Statements beginning 'the only' are translated differently from those beginning with 'only'.  In this case, the terms are not reversed.

</P>  <H4>Examples</H4><UL>            

   <LI> The only animals that live in this canyon are skunks.<BR>
   Becomes:  <B>All</B> animals that live in this canyon <B>are</B> skunks.

   </LI> <LI> Accountants are the only ones who will be hired.<BR>
   Becomes:  <B>All</B> those who will be hired <B>are</B> accountants.  

 </LI> </UL> 
<HR><HR><A NAME="306"></A><H2>306.  Distribution </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Terms :: Distribution </I> ]</UL>
<H4>Description</H4>                

   <P> An attribute of a term.  A term is distributed when the proposition speaks of <I>every</I> member of the class denoted by the term.  The following are common techniques for finding the distributed terms.  

 </P>  
<HR><HR><A NAME="307"></A><H2>307.  Determination by Form </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Terms :: Distribution :: Determination by Form </I> ]</UL>
<H4>Forms & Distribution</H4><UL>            

   <LI> A form propositions - the subject is distributed

   </LI> <LI> E form propositions - the subject & predicate are distributed

   </LI> <LI> I form propositions - nothing is distributed

   </LI> <LI> O form propositions - the predicate is distributed

  

 </LI> </UL> 
<HR><HR><A NAME="308"></A><H2>308.  Determination by Attribute </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Terms :: Distribution :: Determination by Attribute </I> ]</UL>
<H4>Attributes & Distribution</H4><UL>            

   <LI> Universal propositions - the subject is distributed

   </LI> <LI> Negative propositions - the predicate is distributed

  

 </LI> </UL> 
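Determination by form and determination by attribute must agree; a small Python check (the table encodes the lists above):

```python
# Distribution computed two ways, which must agree:
# by form (the four-entry table) and by attribute (universal/negative rules).
BY_FORM = {
    "A": {"subject"},               # all S are P      -> S distributed
    "E": {"subject", "predicate"},  # no S are P       -> both distributed
    "I": set(),                     # some S are P     -> neither
    "O": {"predicate"},             # some S are not P -> P distributed
}

def by_attribute(form: str) -> set:
    out = set()
    if form in ("A", "E"):  # universal propositions distribute the subject
        out.add("subject")
    if form in ("E", "O"):  # negative propositions distribute the predicate
        out.add("predicate")
    return out

for f in "AEIO":
    assert BY_FORM[f] == by_attribute(f)
print("form and attribute rules agree")
```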
<HR><HR><A NAME="309"></A><H2>309.  Determination by Indicators </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Terms :: Distribution :: Determination by Indicators </I> ]</UL>
<H4>Indicators & Distribution</H4><UL>            

   <LI> the word 'all' - the term that immediately follows

   </LI> <LI> the negatives, 'no' and 'not' - all terms that follow.  

 </LI> </UL> 
<HR><HR><A NAME="310"></A><H2>310.  Quantity </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quantity </I> ]</UL>
<H4>Description</H4>                

   <P> The number of items in the subject term being talked about.

</P>  <H4>Notes</H4><UL>            

   <LI> Singular is just a special case of the Particular.  

 </LI> </UL> 
<HR><HR><A NAME="311"></A><H2>311.  Universal </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quantity :: Universal </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition which refers to all members of the subject term.

   </P> <P> Universal propositions begin with one of the universal quantifiers, 'all' or 'no'.

   </P> <P> All A and E propositions are universal propositions; and vice-versa.  

 </P>  
<HR><HR><A NAME="312"></A><H2>312.  Particular </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quantity :: Particular </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition which refers to at least one member of the subject term.

   </P> <P> Particular propositions begin with the particular quantifier, 'some'.

   </P> <P> All I and O propositions are particular propositions; and vice-versa.  

 </P>  
<HR><HR><A NAME="313"></A><H2>313.  Singular </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quantity :: Singular </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition which refers to an individual.

   </P> <P> Singular propositions have no quantifier.

   </P> <P> All As and Es propositions are singular propositions; and vice-versa.  

 </P>  
<HR><HR><A NAME="314"></A><H2>314.  Quality </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quality </I> ]</UL>
<H4>Description</H4>                

   <P> affirmative or negative  

 </P>  
<HR><HR><A NAME="315"></A><H2>315.  Affirmative </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quality :: Affirmative </I> ]</UL>
<H4>Description</H4>                

   <P> A proposition which asserts that something is the case.

   </P> <P> Affirmative universals begin with the quantifier 'all'.  Affirmative particulars and singulars use the copula 'is' (or 'are').

   </P> <P> All A, I and As propositions are affirmatives; and vice-versa.  

 </P>  
<HR><HR><A NAME="316"></A><H2>316.  Negative </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Of Propositions :: Quality :: Negative </I> ]</UL>
<H4>Description</H4>                

   <P> A proposition which denies that something is the case.

   </P> <P> All negative propositions contain the word 'no' or 'not', and vice-versa.

   </P> <P> All E, O and Es propositions are negative; and vice-versa.  

 </P>  
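Quantity and quality together identify each of the six form names; a summary table in Python (the tuple layout is illustrative):

```python
# Quantity and quality of the six traditional forms.
# As/Es are the singular forms described in the text.
CLASSIFY = {
    "A":  ("universal",  "affirmative"),
    "E":  ("universal",  "negative"),
    "I":  ("particular", "affirmative"),
    "O":  ("particular", "negative"),
    "As": ("singular",   "affirmative"),
    "Es": ("singular",   "negative"),
}

print(CLASSIFY["O"])  # ('particular', 'negative')
```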
<HR><HR><A NAME="317"></A><H2>317.  Existential Import </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Existential Import </I> ]</UL>
<H4>Description</H4>                

   <P> Existential Import is the concept that a categorical proposition entails the assertion that its terms exist.

</P>  <H4>Examples</H4><UL>            

   <LI> All horses are mammals.<BR>
   Existential import says there are two implied propositions: Horses exist, and Mammals exist.<BR>

   </LI> <LI> Some unicorns are mammals.<BR>
   Existential import says there are two implied propositions:  Unicorns exist, and Mammals exist.<BR>

   </LI> <LI> Some mammals are unicorns.<BR>
   Existential import says there are two implied propositions:  Mammals exist, and Unicorns exist.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Existential import does not mean that the objects do in fact exist, it just <I>asserts</I> that they exist.  This assertion may in fact be false.  

 </LI> </UL> 
<HR><HR><A NAME="318"></A><H2>318.  Traditional Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Interpretations :: Traditional Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> This interpretation is traditionally attributed to Aristotle.  The traditional interpretation says that all four categorical proposition forms have Existential Import.  This interpretation, however, is problematic.  If all terms actually do exist, then we get correct conclusions.  However, if a term doesn't refer to something that actually exists, the conclusion is wrong.

</P>  <H4>Examples</H4>                

   <P> All pheasants are birds.<BR>
   implies the existence of pheasants and birds.

   </P> <P> No pine trees are maples.<BR>
   implies the existence of pine trees and maples.

   </P> <P> All satyrs are vile creatures.<BR>
   Implies the existence of satyrs and vile creatures.  

 </P>  
<HR><HR><A NAME="319"></A><H2>319.  Modern Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Interpretations :: Modern Interpretation </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Boolean Interpretation

</LI> </UL> <H4>Description</H4>                

   <P> This is the interpretation of George Boole.  The modern interpretation says that universal categorical proposition forms do not include Existential Import, while the particulars do.  This assumption fixes the problem with the traditional interpretation.

</P>  <H4>Examples</H4><UL>            

   <LI> All students who get 100's on all their tests do not have to take the final.<BR>
   Does not imply that there will be any such students by the end of the term.

   </LI> <LI> All pheasants are birds.<BR>
   Does not imply the existence of pheasants.

   </LI> <LI> No pine trees are maples.<BR>
   Does not imply the existence of pine trees.

   </LI> <LI> All satyrs are vile creatures.<BR>
   Does not imply the existence of satyrs.  

 </LI> </UL> 
<HR><HR><A NAME="320"></A><H2>320.  Aristotle's Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Language :: Semantics :: Interpretations :: Aristotle's Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> According to the Stanford Encyclopedia of Philosophy (Terence Parsons, http://plato.stanford.edu, Square of Opposition), there is quite a bit of evidence to suggest that Aristotle's interpretation was quite different from that which has been traditionally attributed to him (i.e. the Traditional Interpretation).  Parsons convincingly argues that Aristotle gave existential import to the affirmatives, while the negatives lacked it.

   </P> <P> Only under this interpretation is it possible to derive the entire Square of Opposition as theorems.  

 </P>  
<HR><HR><A NAME="321"></A><H2>321.  Immediate Inferences </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences </I> ]</UL>
<H4>Description</H4>                

   <P> A deductive argument consisting of one premise and one conclusion.  

 </P>  
<HR><HR><A NAME="322"></A><H2>322.  Direct </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Direct </I> ]</UL>
<H4>Description</H4>                

   <P> An immediate inference in which we deduce something directly from the premise.  

 </P>  
<HR><HR><A NAME="323"></A><H2>323.  Contradictories </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Direct :: Contradictories </I> ]</UL>
<H4>Description</H4>                

   <P> The proposition form pairs (A,O) and (E,I) are said to be contradictories because they necessarily have opposite truth values.  If A, then not O.  If O, then not A.  If not A then O, etc...

</P>  <H4>Conditions</H4><UL>            

   <LI> Traditional Interpretation:  valid only when the terms actually exist<BR>
   Modern Interpretation:  valid<BR>
   Aristotelian Interpretation:  valid

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> All cats are mammals.<BR>
   <B>&#8756;</B> It is not the case that some cats are not mammals.

   </LI> <LI> No birds are mammals.<BR>
   <B>&#8756;</B> It is false that some birds are mammals.  

 </LI> </UL> 
<HR><HR><A NAME="324"></A><H2>324.  Contraries </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Direct :: Contraries </I> ]</UL>
<H4>Description</H4>                

   <P> A and E form propositions are said to be contraries; this means that they cannot both be true.  If A then not E.  If E then not A.

</P>  <H4>Conditions</H4><UL>            

   <LI> In all interpretations valid if and only if the subject term really does exist.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> All cats are mammals.<BR>
   <B>&#8756;</B> It is not the case that no cats are mammals.

   </LI> <LI> No birds are mammals.<BR>
   <B>&#8756;</B> It is not the case that all birds are mammals.  

 </LI> </UL> 
<HR><HR><A NAME="325"></A><H2>325.  Subcontraries </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Direct :: Subcontraries </I> ]</UL>
<H4>Description</H4>                

   <P> I and O form propositions are said to be subcontraries; this means that they cannot both be false.  If not I then O.  If not O then I.

</P>  <H4>Conditions</H4><UL>            

   <LI> In all interpretations valid if and only if the subject term really does exist.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> It is not the case that some birds are mammals.<BR>
   <B>&#8756;</B> Some birds are not mammals.

   </LI> <LI> It is not the case that some cats are not mammals.<BR>
   <B>&#8756;</B> Some cats are mammals.  

 </LI> </UL> 
<HR><HR><A NAME="326"></A><H2>326.  Subalternations </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Direct :: Subalternations </I> ]</UL>
<H4>Description</H4>                

   <P> The pairs (A,I) and (E,O) are called subalternations.  If the first is true then so is the second.  That is, the first <I>implies</I> the second.  Thus, if the second is false, so is the first.  If A then I.  If not I then not A, etc.

</P>  <H4>Conditions</H4>                

   <P> In all interpretations valid if and only if the subject term really does exist.

</P>  <H4>Examples</H4>                

   <P> All cats are mammals.<BR>
   <B>&#8756;</B> Some cats are mammals.

   </P> <P> No birds are mammals.<BR>
   <B>&#8756;</B> Some birds are not mammals.

   </P> <P> It is not the case that some birds are mammals.<BR>
   <B>&#8756;</B> It is not the case that all birds are mammals.

   </P> <P> It is not the case that some cats are not mammals.<BR>
   <B>&#8756;</B> It is not the case that no cats are mammals.  

 </P>  
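The existence condition can be made concrete with a set reading (universals without existential import); the sets are illustrative.  A implies I whenever the subject class is nonempty, and fails when it is empty:

```python
def A(S, P): return S <= P          # all S are P
def I(S, P): return bool(S & P)     # some S are P

P = {"rex", "felix"}

# Nonempty subject: subalternation holds.
S = {"rex"}
print(A(S, P), I(S, P))   # True True  -> A implies I

# Empty subject ("all unicorns..."): A is vacuously true, I is false.
S = set()
print(A(S, P), I(S, P))   # True False -> the inference fails
```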
<HR><HR><A NAME="327"></A><H2>327.  Conversion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Conversion </I> ]</UL>
<H4>Description</H4>                

   <P> An immediate inference in which we modify some attribute of the premise.  

 </P>  
<HR><HR><A NAME="328"></A><H2>328.  Simple Conversion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Conversion :: Simple Conversion </I> ]</UL>
<H4>Description</H4>                

   <P> Swap the subject and predicate terms.  The resulting proposition is called the <I>converse</I>. 

</P>  <H4>Conditions</H4><UL>            

   <LI> Not valid under Aristotle's interpretation.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> E- and I-form propositions are logically equivalent to their converse.

   </LI> <LI> A- and O-form propositions are logically unrelated to their converse.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> No birds are mammals.<BR>
   <B>&#8756;</B> No mammals are birds.

   </LI> <LI> Some cats are mammals.<BR>
   <B>&#8756;</B> Some mammals are cats.  

 </LI> </UL> 
<HR><HR><A NAME="329"></A><H2>329.  Obversion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Conversion :: Obversion </I> ]</UL>
<H4>Description</H4>                

   <P> Change the quality (affirmative/negative) and complement the predicate term.

</P>  <H4>Conditions</H4><UL>            

   <LI> Obversion is not valid under Aristotle's interpretation and was not advocated by him.

   </LI> <LI> Obversion is only conditionally valid under the modern interpretation.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> All categorical propositions are logically equivalent to (and have the same meaning as) their obverse.

</LI> </UL> <H4>Examples</H4>                
	
   <P> A-form:<BR>
   All cats are mammals.<BR>
   <B>&#8756;</B> No cats are non-mammals.

   </P> <P> E-form:<BR>
   No birds are mammals.<BR>
   <B>&#8756;</B> All birds are non-mammals.	

   </P> <P> I-form:<BR>
   Some cats are mammals.<BR>
   <B>&#8756;</B> Some cats are not non-mammals.

   </P> <P> O-form:<BR>
   Some birds are not mammals.<BR>
   <B>&#8756;</B> Some birds are non-mammals.  

 </P>  
<HR><HR><A NAME="330"></A><H2>330.  Contraposition </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Immediate Inferences :: Conversion :: Contraposition </I> ]</UL>
<H4>Description</H4>                

   <P> Swap and complement the terms.

</P>  <H4>Conditions</H4><UL>            

   <LI> Contraposition is not valid under Aristotle's interpretation and was not advocated by him.

   </LI> <LI> Contraposition is only conditionally valid under the modern interpretation.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> A- and O-form propositions are logically equivalent to (and have the same meaning as) their contrapositives.

   </LI> <LI> E- and I- form propositionis are logically unrelated to their contrapositions.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> All cats are mammals.<BR>
   <B>&#8756;</B> All non-mammals are non-cats.

   </LI> <LI> Some birds are not mammals.<BR>
   <B>&#8756;</B> Some non-mammals are not non-birds.  

 </LI> </UL> 
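<P> Like obversion, contraposition can be sketched in Python (the tuple representation of a proposition is an assumption of the example): </P>

```python
# Sketch of contraposition: swap the subject and predicate terms
# and complement both.

def complement(term: str) -> str:
    """Complement a term: 'cats' <-> 'non-cats'."""
    return term[4:] if term.startswith('non-') else 'non-' + term

def contrapositive(form: str, subject: str, predicate: str):
    """Return the contrapositive of a categorical proposition (form, S, P)."""
    return form, complement(predicate), complement(subject)
```

<P> For example, contrapositive('A', 'cats', 'mammals') gives ('A', 'non-mammals', 'non-cats'), i.e. "All non-mammals are non-cats", matching the first example above. </P>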
<HR><HR><A NAME="331"></A><H2>331.  Syllogism </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A syllogism is an argument consisting of exactly three categorical propositions and containing exactly three different terms, each of which appears twice in distinct propositions.

</P>  <H4>Notes</H4><UL>            

   <LI> Not all syllogisms form valid arguments.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> All soldiers are patriots.<BR>
   No traitors are patriots.<BR>
   <B>&#8756;</B> No traitors are soldiers.  

 </LI> </UL> 
<HR><HR><A NAME="332"></A><H2>332.  Standard Form </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Standard Form </I> ]</UL>
<H4>Description</H4>                

   <P> A categorical syllogism is in standard form when the following four conditions are met.

</P>  <H4></H4><UL>            

   <LI> All three statements are standard-form categorical propositions.

   </LI> <LI> The two occurrences of each term are identical.

   </LI> <LI> Each term is used in the same sense throughout the argument.  (This prevents the fallacy of equivocation.)

   </LI> <LI> One proposition is listed per line in the following order:<BR>
<BR>
	major premise<BR>
	minor premise<BR>
	<B>&#8756;</B> conclusion<BR>  

 </LI> </UL> 
<HR><HR><A NAME="333"></A><H2>333.  Terms </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Components :: Terms </I> ]</UL>
<H4>Description</H4>                

   <P> The three terms of the categorical syllogism are given names depending upon their position in the argument.  

 </P>  
<HR><HR><A NAME="334"></A><H2>334.  Major Term </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Components :: Terms :: Major Term </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The predicate of the conclusion.  

 </P>  
<HR><HR><A NAME="335"></A><H2>335.  Minor Term </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Components :: Terms :: Minor Term </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The subject of the conclusion.  

 </P>  
<HR><HR><A NAME="336"></A><H2>336.  Middle Term </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Components :: Terms :: Middle Term </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The term common to the two premises, but not in the conclusion.  

 </P>  
<HR><HR><A NAME="337"></A><H2>337.  Major Premise </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Components :: Propositions :: Premises :: Major Premise </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The premise containing the major term.  

 </P>  
<HR><HR><A NAME="338"></A><H2>338.  Minor Premise </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Components :: Propositions :: Premises :: Minor Premise </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The premise with the minor term.  

 </P>  
<HR><HR><A NAME="339"></A><H2>339.  Attributes </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Attributes </I> ]</UL>
<H4>Description</H4>                

   <P> The two things used to identify the form of any categorical syllogism.

   </P> <P> The form is identified by naming the mood (Three categorical proposition letters, A, E, I or O), then the figure (a number 1 to 4).

   </P> <P> With four kinds of categorical propositions in three positions, there are 4 x 4 x 4 = 64 possible moods.  64 moods x 4 figures = 256 forms of categorical syllogism.  

 </P>  
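<P> The counting can be confirmed with a short enumeration (an illustrative sketch, not part of the outline): </P>

```python
# Enumerate the moods (three proposition letters drawn from A, E, I, O)
# and the mood/figure combinations to confirm the counts above.
from itertools import product

moods = [''.join(m) for m in product('AEIO', repeat=3)]
forms = [(mood, figure) for mood in moods for figure in (1, 2, 3, 4)]

print(len(moods))  # 64 possible moods
print(len(forms))  # 256 forms of categorical syllogism
```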
<HR><HR><A NAME="340"></A><H2>340.  Mood </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Attributes :: Mood </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The letter names of the propositions that make up the categorical syllogism.  These letters are in the order Major Premise, Minor Premise, Conclusion.  

 </P>  
<HR><HR><A NAME="341"></A><H2>341.  Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Analysis :: Attributes :: Figure </I> ]</UL>
<H4>Description</H4>                

   <P> Figure is determined by the locations of the middle term in the two premises.  Four arrangements are possible.  The actual figure, then, is a number from one to four.

   </P> <P> If we let S stand for subject and P for predicate, and list the position of the middle term first in the major premise and then in the minor premise, the four figures are:
<PRE>
SP	1
PP	2
SS	3
PS	4
</PRE>  

 </P>  
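<P> The table above is a direct lookup, sketched here in Python for illustration: </P>

```python
# Determine the figure from the middle term's position in each premise:
# 'S' if the middle term is the subject of that premise, 'P' if it is
# the predicate.  Positions are listed major premise first.

FIGURES = {('S', 'P'): 1, ('P', 'P'): 2, ('S', 'S'): 3, ('P', 'S'): 4}

def figure(middle_in_major: str, middle_in_minor: str) -> int:
    return FIGURES[(middle_in_major, middle_in_minor)]
```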
<HR><HR><A NAME="342"></A><H2>342.  Validity </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity </I> ]</UL>
<H4>Description</H4>                

   <P> A categorical syllogism in standard form is easily checked for validity by checking the mood and figure against a list of valid forms.

   </P> <P> The actual list of valid forms depends upon the standpoint.  The Boolean (modern) standpoint lists only the 15 unconditionally valid forms.  The traditional standpoint lists an additional 9 conditionally valid forms.

   </P> <P> Memorization poem (from the Middle Ages):
<PRE>
Barbara, Celarent, Darii, Ferio que prioris;
Cesare, Camestres, Festino, Baroco secundae;
Tertia, Darapti, Disamis, Datisi, Felapton,
Bocardo, Ferison habet: quarta insuper addit
Bramantip, Camenes, Dimaris, Fesapo, Fresison.
</PRE>

   </P> <P> The five omitted forms were considered weak because they draw a particular conclusion from premises that would support a (stronger) universal conclusion.  

 </P>  
<HR><HR><A NAME="343"></A><H2>343.  Unconditionally Valid Forms </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Unconditionally Valid Forms </I> ]</UL>
<H4>Description</H4>                

   <P> These are the only forms valid under the modern interpretation (they are also valid under the traditional interpretation).  They are listed here with their traditional names.  

 </P>  
<HR><HR><A NAME="344"></A><H2>344.  First Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Unconditionally Valid Forms :: First Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> AAA-1 :: Barbara

   </LI> <LI> EAE-1 :: Celarent

   </LI> <LI> AII-1 :: Darii

   </LI> <LI> EIO-1 :: Ferio  

 </LI> </UL> 
<HR><HR><A NAME="345"></A><H2>345.  Second Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Unconditionally Valid Forms :: Second Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> EAE-2 :: Cesare

   </LI> <LI> AEE-2 :: Camestres

   </LI> <LI> EIO-2 :: Festino

   </LI> <LI> AOO-2 :: Baroco  

 </LI> </UL> 
<HR><HR><A NAME="346"></A><H2>346.  Third Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Unconditionally Valid Forms :: Third Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> IAI-3 :: Disamis

   </LI> <LI> AII-3 :: Datisi

   </LI> <LI> OAO-3 :: Bocardo

   </LI> <LI> EIO-3 :: Ferison  

 </LI> </UL> 
<HR><HR><A NAME="347"></A><H2>347.  Fourth Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Unconditionally Valid Forms :: Fourth Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> AEE-4 :: Camenes

   </LI> <LI> IAI-4 :: Dimaris

   </LI> <LI> EIO-4 :: Fresison  

 </LI> </UL> 
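<P> Since validity checking by form is just a lookup, the 15 unconditionally valid forms listed above can be collected into a table (an illustrative Python sketch): </P>

```python
# The 15 unconditionally valid forms, keyed "MOOD-FIGURE", with their
# traditional names, usable as a simple validity lookup.

UNCONDITIONALLY_VALID = {
    'AAA-1': 'Barbara',  'EAE-1': 'Celarent',  'AII-1': 'Darii',   'EIO-1': 'Ferio',
    'EAE-2': 'Cesare',   'AEE-2': 'Camestres', 'EIO-2': 'Festino', 'AOO-2': 'Baroco',
    'IAI-3': 'Disamis',  'AII-3': 'Datisi',    'OAO-3': 'Bocardo', 'EIO-3': 'Ferison',
    'AEE-4': 'Camenes',  'IAI-4': 'Dimaris',   'EIO-4': 'Fresison',
}

def is_unconditionally_valid(mood: str, figure: int) -> bool:
    return f'{mood}-{figure}' in UNCONDITIONALLY_VALID
```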
<HR><HR><A NAME="348"></A><H2>348.  Conditionally Valid Forms </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Conditionally Valid Forms </I> ]</UL>
<H4>Description</H4>                

   <P> These forms are only valid under the traditional interpretation, and then only conditionally so.  Thus, each form includes a condition under which it is valid.  

 </P>  
<HR><HR><A NAME="349"></A><H2>349.  First Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Conditionally Valid Forms :: First Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> AAI-1, provided S-term exists

   </LI> <LI> EAO-1, provided S-term exists  

 </LI> </UL> 
<HR><HR><A NAME="350"></A><H2>350.  Second Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Conditionally Valid Forms :: Second Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> AEO-2, provided S-term exists

   </LI> <LI> EAO-2, provided S-term exists  

 </LI> </UL> 
<HR><HR><A NAME="351"></A><H2>351.  Third Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Conditionally Valid Forms :: Third Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> AAI-3, provided M-term exists

   </LI> <LI> EAO-3, provided M-term exists  

 </LI> </UL> 
<HR><HR><A NAME="352"></A><H2>352.  Fourth Figure </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Forms :: Conditionally Valid Forms :: Fourth Figure </I> ]</UL>
<H4>Valid Forms</H4><UL>            

   <LI> AEO-4, provided S-term exists

   </LI> <LI> EAO-4, provided M-term exists

   </LI> <LI> AAI-4, provided P-term exists  

 </LI> </UL> 
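<P> The 9 conditionally valid forms listed above can likewise be tabulated, mapping each form to the term whose non-emptiness it requires (an illustrative Python sketch): </P>

```python
# The 9 conditionally valid forms (traditional interpretation only),
# keyed "MOOD-FIGURE", mapped to the term that must denote at least
# one existing thing for the syllogism to be valid.

CONDITIONALLY_VALID = {
    'AAI-1': 'S', 'EAO-1': 'S',
    'AEO-2': 'S', 'EAO-2': 'S',
    'AAI-3': 'M', 'EAO-3': 'M',
    'AEO-4': 'S', 'EAO-4': 'M', 'AAI-4': 'P',
}

def is_conditionally_valid(mood: str, figure: int, nonempty_terms) -> bool:
    """Valid only if the required term is known to be non-empty."""
    required = CONDITIONALLY_VALID.get(f'{mood}-{figure}')
    return required is not None and required in nonempty_terms
```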
<HR><HR><A NAME="353"></A><H2>353.  Relating to Mood </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Rules :: Traditional :: Relating to Mood </I> ]</UL>
<H4>Rules</H4><UL>            

   <LI> A syllogism cannot have two negative premises.<BR>
   Related Fallacies:  Exclusive Premises fallacy

   </LI> <LI> The conclusion is negative if and only if a premise is negative.<BR>
   Related Fallacies:  Illicit Affirmative/Negative fallacies  

 </LI> </UL> 
<HR><HR><A NAME="354"></A><H2>354.  Relating to Distribution </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Rules :: Traditional :: Relating to Distribution </I> ]</UL>
<H4>Rules</H4><UL>            

   <LI> The middle term must be distributed at least once.<BR>
   Related Fallacies:  Undistributed Middle fallacy

   </LI> <LI> If a term is distributed in the conclusion, then it must be distributed in a premise.<BR>
   Related Fallacies:  Illicit Major and Illicit Minor fallacies  

 </LI> </UL> 
<HR><HR><A NAME="355"></A><H2>355.  Modern </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Rules :: Modern </I> ]</UL>
<H4>Description</H4>                

   <P> To test validity under the modern point of view, an additional rule is required.  

 </P>  
<HR><HR><A NAME="356"></A><H2>356.  Relating to Universals </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Syllogism :: Evaluation :: Validity :: by Rules :: Modern :: Relating to Universals </I> ]</UL>
<H4>Rules</H4><UL>            

   <LI> If both premises are universal, the conclusion cannot be particular.<BR>
   Related Fallacies:  Existential fallacy  

 </LI> </UL> 
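<P> The rules of the last few articles combine into a single rule-based test.  The following Python sketch encodes the standard distribution table (A distributes its subject, E both terms, I neither, O its predicate) and the figure layouts; these encodings are standard term-logic facts supplied for the example, not quotations from this outline: </P>

```python
# Rule-based validity test for a syllogism given its mood (e.g. 'EIO')
# and figure (1-4), applying the rules about negative premises and
# distribution, plus the modern rule about universals.

DISTRIBUTES = {'A': (True, False), 'E': (True, True),
               'I': (False, False), 'O': (False, True)}  # (subject, predicate)
NEGATIVE  = {'A': False, 'E': True, 'I': False, 'O': True}
UNIVERSAL = {'A': True, 'E': True, 'I': False, 'O': False}

# (subject, predicate) of the major and minor premises for each figure,
# where S = minor term, P = major term, M = middle term.
FIGURE_LAYOUT = {1: (('M', 'P'), ('S', 'M')), 2: (('P', 'M'), ('S', 'M')),
                 3: (('M', 'P'), ('M', 'S')), 4: (('P', 'M'), ('M', 'S'))}

def distributed_terms(form, terms):
    return {t for t, d in zip(terms, DISTRIBUTES[form]) if d}

def is_valid(mood, figure):
    major, minor, concl = mood
    layout = FIGURE_LAYOUT[figure]
    in_premises = distributed_terms(major, layout[0]) | distributed_terms(minor, layout[1])
    in_conclusion = distributed_terms(concl, ('S', 'P'))
    if 'M' not in in_premises:                       # undistributed middle
        return False
    if not in_conclusion <= in_premises:             # illicit major/minor
        return False
    if NEGATIVE[major] and NEGATIVE[minor]:          # exclusive premises
        return False
    if NEGATIVE[concl] != (NEGATIVE[major] or NEGATIVE[minor]):
        return False                                 # illicit affirmative/negative
    if UNIVERSAL[major] and UNIVERSAL[minor] and not UNIVERSAL[concl]:
        return False                                 # existential fallacy (modern)
    return True
```

<P> Run over all 256 forms, this test accepts exactly the 15 unconditionally valid forms listed earlier. </P>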
<HR><HR><A NAME="357"></A><H2>357.  Enthymeme </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Enthymeme </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An argument that is expressible as a categorical syllogism but that is missing a premise or a conclusion.  

 </P>  
<HR><HR><A NAME="358"></A><H2>358.  Sorites </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Sorites </I> ]</UL>
<H4>Description</H4>                

   <P> A sorites is a chain of categorical syllogisms in which the intermediate conclusions have been left out.  

 </P>  
<HR><HR><A NAME="359"></A><H2>359.  Square of Oppositions </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Square of Oppositions </I> ]</UL>
<H4>Description</H4>                

   <P> The square of oppositions is a diagrammatic way of illustrating the direct immediate inferences among the four standard-form categorical propositions.  The forms are arranged in a square, thus:

<PRE>
	A	E

	I	O
</PRE>

   </P> <P> The top row lists the Universals, the bottom lists the Particulars.  The first column is the affirmatives and the second column is the negatives.

   </P> <P> Relations are then represented as lines connecting the four forms.  The particular set of lines depends upon the interpretation used.  

 </P>  
<HR><HR><A NAME="360"></A><H2>360.  Aristotelian & Traditional </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Square of Oppositions :: Aristotelian & Traditional </I> ]</UL>
<H4>Relations</H4>                

   <P> Contradictories<BR>
   An 'X' is drawn through the center of the square connecting the pairs (A,O) and (E,I) and labeled 'contradictories'.

   </P> <P> Contraries<BR>
   A line is drawn between A and E and labeled 'contraries'.

   </P> <P> Subcontraries<BR>
   A line is drawn between I and O and labeled 'subcontraries'.

   </P> <P> Subalternations<BR>
   An arrow is drawn from A to I and from E to O and labeled 'subalternation'.  

 </P>  
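<P> The relations above license immediate inferences: given the truth value of one form, certain values follow for the others.  The table below encodes the standard traditional result (supplied here as an illustrative sketch; None means 'undetermined'): </P>

```python
# Immediate inferences from the traditional square of opposition:
# given that one form has a known truth value, what follows for the
# other three forms?  None means the value is undetermined.

SQUARE = {
    ('A', True):  {'E': False, 'I': True,  'O': False},
    ('A', False): {'E': None,  'I': None,  'O': True},
    ('E', True):  {'A': False, 'I': False, 'O': True},
    ('E', False): {'A': None,  'I': True,  'O': None},
    ('I', True):  {'A': None,  'E': False, 'O': None},
    ('I', False): {'A': False, 'E': True,  'O': True},
    ('O', True):  {'A': False, 'E': None,  'I': None},
    ('O', False): {'A': True,  'E': False, 'I': True},
}

def infer(form: str, value: bool) -> dict:
    """Inferred truth values for the other three forms."""
    return SQUARE[(form, value)]
```

<P> For example, if A is true then its contrary E is false, its subaltern I is true, and its contradictory O is false. </P>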
<HR><HR><A NAME="361"></A><H2>361.  Modern </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Square of Oppositions :: Modern </I> ]</UL>
<H4>Relations</H4>                

   <P> Contradictories<BR>
   An 'X' is drawn through the center of the square connecting the pairs (A,O) and (E,I) and labeled 'contradictories'.

   </P> <P> No other relations are (formally) valid under the modern interpretation.  

 </P>  
<HR><HR><A NAME="362"></A><H2>362.  Venn Diagrams </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams </I> ]</UL>
<H4>Description</H4>                

   <P> Venn Diagrams are a diagrammatic tool for working with categorical propositions and arguments.

   </P> <P> In a Venn diagram, a circle represents a categorical term, an 'X' in an area indicates that something exists in that area, and blocking out (shading) an area indicates that nothing exists there.  

 </P>  
<HR><HR><A NAME="363"></A><H2>363.  Propositions </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Propositions </I> ]</UL>
<H4>Description</H4>                

   <P> The Venn Diagram for a categorical proposition consists of two overlapping circles.  The circle on the left represents the subject term; the one on the right represents the predicate term.  This creates four areas: the area only in the subject circle; the area where the circles overlap (representing the intersection of subject and predicate); the area only in the predicate circle; and the area completely outside both circles.

   </P> <P> Each of the four propositional forms is represented by marking one of these areas: either blocking out an area or placing an 'X' in it.  Blocking out an area means that the proposition asserts that nothing exists in that area.  An 'X' is placed in an area to indicate that the proposition asserts that at least one object exists in that area.

<PRE>
   If we refer to the four areas as:

      S,  subject only
      P,  predicate only
      SP, subject/predicate overlap
      O,  outside both circles

   then the diagram for each proposition is:

      A-form,  block out S.
      E-form,  block out SP.
      I-form,  X in SP.
      O-form,  X in S.
</PRE>  

 </P>  
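<P> The marking scheme above can be sketched directly in Python, returning the diagram of a proposition as a pair of mark sets (an illustrative sketch using the area names above): </P>

```python
# The Venn diagram of a categorical proposition as marks on the four
# areas 'S', 'P', 'SP', 'O'.  Returns (shaded_areas, x_areas).

def proposition_diagram(form: str):
    if form == 'A':
        return {'S'}, set()      # block out S
    if form == 'E':
        return {'SP'}, set()     # block out SP
    if form == 'I':
        return set(), {'SP'}     # X in SP
    if form == 'O':
        return set(), {'S'}      # X in S
    raise ValueError(f'unknown form: {form}')
```

<P> Comparing diagrams makes immediate inferences easy to check; for instance, the E- and I-form diagrams both mark only the overlap area, which is why simple conversion preserves them. </P>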
<HR><HR><A NAME="364"></A><H2>364.  Aristotelian & Traditional Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Immediate Inferences :: Direct :: Aristotelian & Traditional Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> Since the Aristotelian standpoint takes the view that universals imply the existence of things, the diagrams for these propositions include both blocking out an area and placing an 'X'.  To distinguish this from the Boolean point of view, the 'X' is drawn with a circle around it.

</P>  <H4>Forms & Venn Diagrams</H4><UL>            

   <LI> A :: Block out the subject-only area, 'X' in the overlap area.

   </LI> <LI> E :: Block out the overlap area, 'X' in the subject-only area.

   </LI> <LI> I :: 'X' in the overlap area.

   </LI> <LI> O :: 'X' in the subject-only area.  

 </LI> </UL> 
<HR><HR><A NAME="365"></A><H2>365.  Modern Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Immediate Inferences :: Direct :: Modern Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> Since the Boolean interpretation of the universals does not assume that anything exists, no 'X' is drawn for them.

</P>  <H4>Forms & Diagrams</H4><UL>            

   <LI> A :: Block out the subject-only area.

   </LI> <LI> E :: Block out the overlap area.

   </LI> <LI> I :: 'X' in the overlap area.

   </LI> <LI> O :: 'X' in the subject-only area.  

 </LI> </UL> 
<HR><HR><A NAME="366"></A><H2>366.  Simple Conversion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Immediate Inferences :: Conversion :: Simple Conversion </I> ]</UL>
<H4>Description</H4>                

   <P> If the subject-only or predicate-only portion of the Venn diagram has a mark in it for the original proposition (A or O propositions), then in the diagram of the converse the mark is moved to the other "-only" portion.  So a subject-only mark becomes a predicate-only mark and vice-versa.

   </P> <P> Diagrams of the converse for propositions whose Venn diagrams mark the overlap area (E or I propositions) are identical to the original.  

 </P>  
<HR><HR><A NAME="367"></A><H2>367.  Obversion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Immediate Inferences :: Conversion :: Obversion </I> ]</UL>
<H4>Description</H4>                

   <P> The Venn diagram for the obverse of any categorical proposition is identical to the Venn diagram for the original  proposition.  

 </P>  
<HR><HR><A NAME="368"></A><H2>368.  Contraposition </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Immediate Inferences :: Conversion :: Contraposition </I> ]</UL>
<H4>Description</H4>                

   <P> The Venn diagram for the contrapositive of an A or O categorical proposition is the same as that for the original proposition.

   </P> <P> For E and I categorical propositions, the mark in the overlap area of the diagram is moved to the area outside both circles for the contrapositive.  

 </P>  
<HR><HR><A NAME="369"></A><H2>369.  Syllogisms </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Syllogisms </I> ]</UL>
<H4>Description</H4>                

   <P> This is similar to the Venn diagram for a categorical proposition.  We start with the two overlapping circles representing the subject and predicate terms of the conclusion.  Then a third overlapping circle is drawn to represent the middle term.  This circle is drawn slightly above and centered between the other two, forming a sort of triangle of circles.  The new circle then has four segments: a middle-term-only segment, a middle-term/subject segment, a middle-term/predicate segment, and a middle-term/subject/predicate segment.  Similarly, each of the other two circles now has four segments.  This gives a total of seven segments, which we can denote as follows:

<PRE>
	S,	subject only
	P,	predicate only
	M,	middle term only
	SP,	subject/predicate
	SM,	subject/middle term
	PM,	predicate/middle term
	SPM,	subject/predicate/middle term
</PRE>  

 </P>  
<HR><HR><A NAME="370"></A><H2>370.  Marking Hints </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Syllogisms :: Marking Hints </I> ]</UL>
<H4>Hints</H4><OL>            

   <LI> Marks should be entered only for the premises.

   </LI> <LI> Make the marks for universal premises first.

   </LI> <LI> When entering the information contained in a premise, one should concentrate on the circles corresponding to the two terms in the statement.  While the third circle cannot be ignored altogether, it should be given only minimal attention.

   </LI> <LI> When inspecting a completed diagram to see if it supports a particular conclusion, remember that particular statements assert two things.  That the subject exists and that it is/is not contained in the predicate set.

   </LI> <LI> When shading an area, one must be careful to shade all of the area in question.

   </LI> <LI> The area where an 'X' goes is always initially divided into two parts.  If one of these parts has already been shaded, the 'X' goes in the unshaded part.  If neither part is shaded, the 'X' goes on the line separating the two parts.

   </LI> <LI> An 'X' should never be placed in such a way that it dangles outside of the diagram, and it should never be placed on the intersection of two lines.  

 </LI> </OL> 
<HR><HR><A NAME="371"></A><H2>371.  Traditional Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Syllogisms :: Traditional Interpretation </I> ]</UL>
<H4>Hints</H4><OL>            

   <LI> Reduce the syllogism to its form and test it from the Modern Interpretation.  If the form is valid, proceed no further.  The syllogism is unconditionally valid.

   </LI> <LI> If the syllogistic form is invalid from the Modern Interpretation and there is a Venn circle that is completely shaded except for one area, adopt the Aristotelian standpoint and enter a circled 'X' in the unshaded part of that circle.  Retest the form.

   </LI> <LI> If the syllogistic form is conditionally valid, determine if the circled 'X' represents something that exists.  If it does, the condition is fulfilled, and the syllogism is valid from the Aristotelian standpoint.  

 </LI> </OL> 
<HR><HR><A NAME="372"></A><H2>372.  Modern Interpretation </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Venn Diagrams :: Syllogisms :: Modern Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> Interpretation from the modern (Boolean) standpoint is straightforward: just read the result off the Venn diagram.  

 </P>  
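<P> The whole procedure under the modern interpretation can be sketched end-to-end in Python.  Regions are the seven segments named earlier, represented as sets of terms; universal premises shade regions, particular premises place an 'X' whose possible locations may span several regions, and the conclusion is then read off the diagram.  The statement encoding is an assumption of this sketch, not the outline's notation: </P>

```python
# Venn-diagram test of a syllogism under the modern interpretation.
# A statement is (quantifier, term1, term2) with terms 'S', 'P', 'M'
# and quantifier in {'all', 'no', 'some', 'some-not'}.
from itertools import combinations

TERMS = ('S', 'P', 'M')
REGIONS = [frozenset(c) for n in (1, 2, 3) for c in combinations(TERMS, n)]

def _regions(a, b, without_b):
    """Regions containing term a, with or without term b."""
    return {r for r in REGIONS if a in r and (b not in r if without_b else b in r)}

def venn_valid(premises, conclusion):
    shaded, xs = set(), []
    for q, a, b in premises:                 # universal premises shade regions
        if q == 'all':
            shaded |= _regions(a, b, without_b=True)
        elif q == 'no':
            shaded |= _regions(a, b, without_b=False)
    for q, a, b in premises:                 # particular premises place an X
        if q == 'some':
            xs.append(_regions(a, b, without_b=False) - shaded)
        elif q == 'some-not':
            xs.append(_regions(a, b, without_b=True) - shaded)
    q, a, b = conclusion                     # read the conclusion off the diagram
    if q == 'all':
        return _regions(a, b, without_b=True) <= shaded
    if q == 'no':
        return _regions(a, b, without_b=False) <= shaded
    target = _regions(a, b, without_b=(q == 'some-not'))
    return any(x and x <= target for x in xs)
```

<P> For example, Barbara (AAA-1) tests valid, while AAI-1 fails under this interpretation because the universal premises place no 'X' anywhere. </P>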
<HR><HR><A NAME="373"></A><H2>373.  Star Test (Gensler) </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) </I> ]</UL>
<H4>Description</H4>                

   <P> Gensler's method is concerned only with the modern interpretation.  

 </P>  
<HR><HR><A NAME="374"></A><H2>374.  The Star Test for Validity </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) :: The Star Test for Validity </I> ]</UL>
<H4>Test</H4><OL>            

   <LI> For every premise, star each distributed categorical term (non-singular terms).

   </LI> <LI> Star each undistributed term in the conclusion.

</LI> </OL> <H4>Analysis</H4>                

   <P> The argument is valid if it satisfies each of the following:

</P>  <H4></H4><OL>            

   <LI> Each categorical term is starred exactly once.

   </LI> <LI> Only one predicate term is starred.  All other stars are on subject terms.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> This works for arguments with any number of premises, including arguments with singular terms.  

 </LI> </UL> 
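<P> The star test is easy to sketch in code.  The following Python sketch assumes a simple tuple encoding of statements (quantifier, subject, predicate), which is not Gensler's own notation: </P>

```python
# Sketch of the star test.  Quantifiers: 'all', 'no', 'some', 'some-not'.
from collections import Counter

# Which positions each form distributes: (subject, predicate).
DISTRIBUTES = {'all': (True, False), 'no': (True, True),
               'some': (False, False), 'some-not': (False, True)}

def star_test(premises, conclusion):
    stars = []                       # (term, side) pairs; side 'L' or 'R'
    for q, s, p in premises:         # star distributed terms in premises
        if DISTRIBUTES[q][0]: stars.append((s, 'L'))
        if DISTRIBUTES[q][1]: stars.append((p, 'R'))
    q, s, p = conclusion             # star undistributed terms in conclusion
    if not DISTRIBUTES[q][0]: stars.append((s, 'L'))
    if not DISTRIBUTES[q][1]: stars.append((p, 'R'))
    terms = {t for st in premises + [conclusion] for t in (st[1], st[2])}
    counts = Counter(t for t, _ in stars)
    right_stars = sum(1 for _, side in stars if side == 'R')
    # Valid: every term starred exactly once, exactly one predicate star.
    return all(counts[t] == 1 for t in terms) and right_stars == 1
```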
<HR><HR><A NAME="375"></A><H2>375.  Deriving Conclusions </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) :: Deriving Conclusions </I> ]</UL>
<H4>Description</H4>                

   <P> Given a set of premises, derive the appropriate conclusion.  

 </P>  
<HR><HR><A NAME="376"></A><H2>376.  Translate the Premises </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) :: Deriving Conclusions :: Translate the Premises </I> ]</UL>
<H4>Description</H4>                

   <P> Translate the premises into logic.  Then begin the star test.

   </P> <P> Check whether any rules of the star test are broken; if so, you cannot continue.

</P>  <H4>Possible Errors</H4><UL>            

   <LI> Two predicate terms are starred.

   </LI> <LI> A term that occurs more than once is not starred exactly once.  (Each such term should be starred exactly once.)  

 </LI> </UL> 
<HR><HR><A NAME="377"></A><H2>377.  Determine the Terms of the conclusion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) :: Deriving Conclusions :: Determine the Terms of the conclusion </I> ]</UL>
<H4>Description</H4>                

   <P> Figure out which terms will occur in the conclusion.  These will be the two terms that occur just once in the premises.  

 </P>  
<HR><HR><A NAME="378"></A><H2>378.  Determine the form of the conclusion </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) :: Deriving Conclusions :: Determine the form of the conclusion </I> ]</UL>
<H4>Description</H4>                

   <P> Figure out which form the conclusion will take.

</P>  <H4>Rules</H4><UL>            

   <LI> If every premise has 'all', the conclusion takes 'all'.

   </LI> <LI> If the premises are a mix of 'all' and 'no', the conclusion has 'no'.

   </LI> <LI> If any premise has 'some', the conclusion has 'some'.

   </LI> <LI> And if any premise has 'no' or 'not', then the conclusion has 'is not'.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> In some cases, one or both of the conclusion letters will be small (singular terms).  Then the conclusion will have a small letter, 'is' or 'is not', and then the other letter.

   </LI> <LI> If any premise has 'no' or 'not', the conclusion is 'x is not A' or 'x is not y'.<BR>
   Otherwise, the conclusion is 'x is A' or 'x is y'.  

 </LI> </UL> 
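<P> The rules above can be sketched as a single function (an illustrative Python sketch using the quantifier names 'all', 'no', 'some', 'some-not' as assumed labels): </P>

```python
# Determine the quantifier of the conclusion from the quantifiers of
# the premises, following the rules above.

def conclusion_form(premise_forms):
    particular = any(q in ('some', 'some-not') for q in premise_forms)
    negative = any(q in ('no', 'some-not') for q in premise_forms)
    if not particular:
        return 'no' if negative else 'all'       # all universal premises
    return 'some-not' if negative else 'some'    # a 'some' premise present
```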
<HR><HR><A NAME="379"></A><H2>379.  Test for validity </H2><UL>[ <I> Inference :: Deduction :: Term Logic :: Inference Theories :: Star Test (Gensler) :: Deriving Conclusions :: Test for validity </I> ]</UL>
<H4>Description</H4>                

   <P> Add the conclusion to the premises and test the argument for validity.  

 </P>  
<HR><HR><A NAME="380"></A><H2>380.  Modern Inference Theories </H2><UL>[ <I> Inference :: Deduction :: Frege-Russell Logics :: Modern Inference Theories </I> ]</UL>
<H4>See Also</H4>                

   <P> <A HREF="#1281" TARGET="baseframe">calculus</A>  

 </P>  
<HR><HR><A NAME="381"></A><H2>381.  Axiomatic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russell Logics :: Modern Inference Theories :: Axiomatic </I> ]</UL>
<H4>Description</H4>                

   <P> An axiomatic system of inference is one in which there are one or more formal propositions (called axioms) which are part of the calculus.  The calculus also has "inference rules": rules governing how premises and axioms may be inserted into the proof and how new propositions may be inferred.

   </P> <P> Axiomatic systems are not commonly used today because it is difficult to see how to get to the conclusion from a set of premises, and the intermediate inferences tend to derive very long and cumbersome propositions.  The advantage of axiomatic calculi is that there tend to be relatively few inference rules and axioms.  The greater difficulty in deriving proofs is what makes them far less desirable than other systems.  

 </P>  
<HR><HR><A NAME="382"></A><H2>382.  Natural Deduction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russell Logics :: Modern Inference Theories :: Natural Deduction </I> ]</UL>
<H4>Description</H4>                

   <P> Natural deduction, so called because it more closely models human thought process than axiomatic calculi, eliminates the axioms and introduces additional inference rules.  Typically, the inference theories presented in this outline are of this variety.  

 </P>  
<HR><HR><A NAME="383"></A><H2>383.  Propositional Logic (PL) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russell Logics :: Propositional Logic (PL) </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Truth-Functional Logic

   </LI> <LI> Tautological Logic

   </LI> <LI> Sentential Logic

</LI> </UL> <H4>Description</H4>                

   <P> Propositional Logic studies arguments whose validity depends upon <B>negation</B> ('not') and the connectives of <B>compound propositions</B> ('and', 'or', etc.)  

 </P>  
<HR><HR><A NAME="384"></A><H2>384.  Proposition Symbol </H2><UL>[ <I> Inference :: Deduction :: Frege-Russell Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Proposition Symbol </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Sentence Symbol

   </LI> <LI> Proposition Letter

</LI> </UL> <H4>Notation</H4><UL>            

   <LI> &lt;capital letter&gt;<BR>
   Where &lt;capital letter&gt; represents a single proposition.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> &lt;lower case letter&gt;

   </LI> <LI> &lt;name&gt;<BR>
   where &lt;name&gt; is a sequence of characters.  Usually, the first character is a capital letter.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI>
<PRE>
	M,	Today is Monday.
	T,	Al is tall and fat.
	P,	X &#8594; Y
	B,	The sky is blue.
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="385"></A><H2>385.  Truth-Functional Operators </H2><UL>[ <I> Inference :: Deduction :: Frege-Russell Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Truth-Functional Operators </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Logical Operators

   </LI> <LI> <A HREF="#1296" TARGET="baseframe">Syncategorematic</A> Terms

</LI> </UL> <H4>Description</H4>                

   <P> <B>Negation</B> and the various connectives used to form <B>compound propositions</B>.  The truth value of the resulting compound proposition is determined only by the truth values of the joined propositions.

</P>  <H4>Notes</H4><UL>            

   <LI> The truth-functional operators are often just called <B>operators</B>.  The component propositions used with a truth-functional operator are often called <B>operands</B>.  

 </LI> </UL> 
<HR><HR><A NAME="386"></A><H2>386.  Negation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Truth-Functional Operators :: Negation </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> &#172;P<BR>
   Where P is some Proposition Symbol.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> ~P

   </LI> <LI> -P

   </LI> <LI> P'

   </LI> <LI> P-bar (P with a bar over the top)

</LI> </UL> <H4>Translation</H4>                

   <P> "It is not the case that P."
  

 </P>  
<HR><HR><A NAME="387"></A><H2>387.  Conjunction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Truth-Functional Operators :: Conjunction </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> P &#8743; Q<BR>
   Where P and Q are Proposition Symbols.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> P * Q

   </LI> <LI> P &amp; Q

   </LI> <LI> PQ

</LI> </UL> <H4>Translation</H4>                

   <P> "P and Q."

</P>  <H4>Examples</H4><UL>            

   <LI> The sky is blue <B>and</B> the grass is green.<BR>
   B  &#8743;  G

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Beware of uses of 'and' that are not purely truth-functional.
<PRE>
      Max went home and Claire went to sleep.

      This example has a temporal implication ('and then') not expressible via &#8743;.
</PRE>

   </LI> <LI> The operands of a conjunction are sometimes called <B>conjuncts</B>.  

 </LI> </UL> 
<HR><HR><A NAME="388"></A><H2>388.  Disjunction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Truth-Functional Operators :: Disjunction </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> P &#8744; Q<BR>
   Where P and Q are Proposition Symbols.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> P + Q

</LI> </UL> <H4>Translation</H4>                

   <P> "P or Q."

</P>  <H4>Examples</H4><UL>            

   <LI> Either Peter is at home <B>or</B> he is at school.<BR>
   H  &#8744;  S

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The operands of a disjunction are sometimes called <B>disjuncts</B>.

   </LI> <LI> The &#8744; notation comes from the first letter of the Latin word for <I>or</I>, <I>vel</I>.  Latin actually has two words for <I>or</I>: <I>aut</I> and <I>vel</I>.  <I>Aut</I> has the meaning of <I>exclusive</I> or, A or B but not both; while <I>vel</I> has the meaning of <I>inclusive</I> or, A or B or both, which is precisely the meaning of &#8744; in logic.  

 </LI> </UL> 
<HR><HR><A NAME="389"></A><H2>389.  Conditional </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Truth-Functional Operators :: Conditional </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Implication

</LI> </UL> <H4>Notation</H4><UL>            

   <LI> P &#8594; Q<BR>
   Where P and Q are Proposition Symbols.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> P &#8835; Q

</LI> </UL> <H4>Translation</H4>                

   <P> "if P then Q", "P implies Q"

</P>  <H4>Examples</H4><UL>            

   <LI> <B>If</B> it rains <B>then</B> the ground is wet.<BR>
   R  &#8594;  W

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The operands of a conditional are sometimes called <B>implicants</B>.

   </LI> <LI> Since the ordering of the implicants is important for a conditional, the implicant which precedes the conditional symbol is called the 'antecedent', while that which follows the conditional symbol is called the 'consequent'.  

 </LI> </UL> 
<HR><HR><A NAME="390"></A><H2>390.  Biconditional </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Truth-Functional Operators :: Biconditional </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> bi-implication

   </LI> <LI> equivalence

</LI> </UL> <H4>Notation</H4><UL>            

   <LI> P &#8596; Q

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> P &#8801; Q

</LI> </UL> <H4>Translation</H4>                

   <P> "P if and only if Q"

</P>  <H4>Examples</H4><UL>            

   <LI> There is a train arriving at the station if and only if the whistle blows.<BR>
   A  &#8596;  W  

 </LI> </UL> 
<HR><HR><A NAME="391"></A><H2>391.  Parenthesis </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Lexical Elements :: Parenthesis </I> ]</UL>
<H4>Description</H4>                

   <P> Parentheses, ( and ), are used for grouping.

</P>  <H4>Alternate Notations</H4><UL>            

   <LI> [ and ]

   </LI> <LI> { and }  

 </LI> </UL> 
<HR><HR><A NAME="392"></A><H2>392.  Well-Formed Formulas (WFFs) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Well-Formed Formulas (WFFs) </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A well-formed formula (wff) is a syntactically correct expression of Propositional Logic.  

 </P>  
<HR><HR><A NAME="393"></A><H2>393.  Formation Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Well-Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>


<H4>WFF</H4><OL>            
   
   <LI> Any Proposition Symbol is a wff.

   </LI> <LI> If &#934; is a wff, so is &#172;&#934;.

   </LI> <LI> If &#934; and &#936; are wffs, so are (&#934; &#8743; &#936;), (&#934; &#8744; &#936;), (&#934; &#8594; &#936;) and (&#934; &#8596; &#936;).

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> We use Greek letters, which belong to the metalanguage: they are variables that stand for arbitrary formulas of propositional logic, so the Greek indicates generality.  By metalanguage, we mean the system used for talking <I>about</I> the logic.

   </LI> <LI> It's common to omit the outermost set of parentheses from wffs to facilitate readability.  
<PRE>
   (P &#8743; Q)    is often written    P &#8743; Q
</PRE>

   </LI> <LI> In a series of conjunctions or a series of disjunctions, it is common to leave out the parentheses.
<PRE>
   (A &#8743; (B &#8743; (C &#8743; D)))  is often written  A &#8743; B &#8743; C &#8743; D
   (A &#8744; (B &#8744; (C &#8744; D)))  is often written  A &#8744; B &#8744; C &#8744; D
</PRE>
   In fact, the arrangement of the parenthesis pairs is completely irrelevant provided the wff contains only conjunctions or only disjunctions.

   </LI> <LI> Be aware, if using these shortened forms, that the form without the parentheses is <B>not</B> a valid wff.  

 </LI> </UL> 
<HR><HR><A NAME="394"></A><H2>394.  "non-" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "non-" </I> ]</UL>
<H4>Hint</H4>                

   <P> Translate as negation.  

 </P>  
<HR><HR><A NAME="395"></A><H2>395.  "but", "yet", "however", and "although" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "but", "yet", "however", and "although" </I> ]</UL>
<H4>Hint</H4>                

   <P> Translate these with a conjunction.  

 </P>  
<HR><HR><A NAME="396"></A><H2>396.  "unless" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "unless" </I> ]</UL>
<H4>Hint</H4>                

   <P> Translate these with a disjunction.  

 </P>  
<HR><HR><A NAME="397"></A><H2>397.  "provided that", "assuming that" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "provided that", "assuming that" </I> ]</UL>
<H4>Description</H4>                

   <P> Swap the terms around "provided that" (or "assuming that") and translate using &#8594;.  

 </P>  
<HR><HR><A NAME="398"></A><H2>398.  "only if" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "only if" </I> ]</UL>
<H4>Hint</H4>                

   <P> Translate as a conditional: "A only if B" becomes A &#8594; B (the clause following 'only if' is the consequent).  

 </P>  
<HR><HR><A NAME="399"></A><H2>399.  "Just if" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "Just if" </I> ]</UL>
<H4>Hint</H4>                

   <P> Translate these with bi-implication.  

 </P>  
<HR><HR><A NAME="400"></A><H2>400.  "necessary" and "sufficient" conditions. </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Syntax :: Formalization Hints :: "necessary" and "sufficient" conditions. </I> ]</UL>
<H4>Hint</H4>                

   <P> "A is sufficient for B"  translates as  "A &#8594; B"

   </P> <P> "A is necessary for B"  translates as  "&#172;A &#8594; &#172;B"

   </P> <P> "A is necessary and sufficient for B"  translates as  "A &#8596; B"  

 </P>  
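<H4>Code Sketch</H4>

   <P> The translation of "A is necessary for B" as &#172;A &#8594; &#172;B can be checked against the perhaps more familiar B &#8594; A by exhausting all four valuations.  A minimal sketch; the helper name <CODE>impl</CODE> is illustrative. </P>

```python
from itertools import product

def impl(p, q):
    """Truth-functional conditional: false only when p is true and q false."""
    return (not p) or q

# "A is necessary for B" (~A -> ~B) agrees with B -> A on every valuation.
for A, B in product([True, False], repeat=2):
    assert impl(not A, not B) == impl(B, A)
print("~A -> ~B matches B -> A on all four valuations")
```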
<HR><HR><A NAME="401"></A><H2>401.  Semantics </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics </I> ]</UL>
<H4>Description</H4>                

   <P> Classical propositional logic is founded on the idea that the meaning of a truth-functional operator is entirely determined by the way it maps the truth values of its operands to the truth value of the resulting compound.  

 </P>  
<HR><HR><A NAME="402"></A><H2>402.  Valuation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Valuation </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Truth Value

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A valuation of a formula or set of formulas of propositional logic is an assignment of one and only one of the truth values T and F to each of the sentence letters occurring in that formula or in any formula of that set.

   </P> <P> A valuation function <I><B>V</B></I>( &#929; ) then, is a function which evaluates the proposition &#929; and returns its truth value.  

 </P>  
<HR><HR><A NAME="403"></A><H2>403.  Negation ( &#172; &#934; ) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Valuation :: Negation ( &#172; &#934; ) </I> ]</UL>


<H4>Valuation</H4><OL>            

   <LI> <I><B>V</B></I>( &#172;&#934; ) = T  iff  <I><B>V</B></I>( &#934; ) &#8800; T.

   </LI> <LI> <I><B>V</B></I>( &#172;&#934; ) = F  iff  <I><B>V</B></I>( &#934; ) = T.  

 </LI> </OL> 
<HR><HR><A NAME="404"></A><H2>404.  Conjunction ( &#934; &#8743; &#936; ) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Valuation :: Conjunction ( &#934; &#8743; &#936; ) </I> ]</UL>


<H4>Valuation</H4><OL>            

   <LI> <I><B>V</B></I>( &#934; &#8743; &#936; ) = T iff both <I><B>V</B></I>( &#934; ) = T and <I><B>V</B></I>( &#936; ) = T.

   </LI> <LI> <I><B>V</B></I>( &#934; &#8743; &#936; ) = F iff either <I><B>V</B></I>( &#934; ) &#8800; T or <I><B>V</B></I>( &#936; ) &#8800; T, or both.
  

 </LI> </OL> 
<HR><HR><A NAME="405"></A><H2>405.  Disjunction ( &#934; &#8744; &#936; ) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Valuation :: Disjunction ( &#934; &#8744; &#936; ) </I> ]</UL>


<H4>Valuation</H4><OL>            

   <LI> <I><B>V</B></I>( &#934; &#8744; &#936; ) = T iff either <I><B>V</B></I>( &#934; ) = T or <I><B>V</B></I>( &#936; ) = T, or both.

   </LI> <LI> <I><B>V</B></I>( &#934; &#8744; &#936; ) = F iff both <I><B>V</B></I>( &#934; ) &#8800; T and <I><B>V</B></I>( &#936; ) &#8800; T.  

 </LI> </OL> 
<HR><HR><A NAME="406"></A><H2>406.  Conditional ( &#934; &#8594; &#936; ) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Valuation :: Conditional ( &#934; &#8594; &#936; ) </I> ]</UL>


<H4>Valuation</H4><OL>            

   <LI> <I><B>V</B></I>( &#934; &#8594; &#936; ) = T iff either <I><B>V</B></I>( &#934; ) &#8800; T or <I><B>V</B></I>( &#936; ) = T, or both.

   </LI> <LI> <I><B>V</B></I>( &#934; &#8594; &#936; ) = F iff both <I><B>V</B></I>( &#934; ) = T and <I><B>V</B></I>( &#936; ) &#8800; T.  

 </LI> </OL> 
<HR><HR><A NAME="407"></A><H2>407.  Biconditional ( &#934; &#8596; &#936; ) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Valuation :: Biconditional ( &#934; &#8596; &#936; ) </I> ]</UL>


<H4>Valuation</H4><OL>            

   <LI> <I><B>V</B></I>( &#934; &#8596; &#936; ) = T iff either <I><B>V</B></I>( &#934; ) = T and <I><B>V</B></I>( &#936; ) = T, or <I><B>V</B></I>( &#934; ) &#8800; T and <I><B>V</B></I>( &#936; ) &#8800; T.

   </LI> <LI> <I><B>V</B></I>( &#934; &#8596; &#936; ) = F iff either <I><B>V</B></I>( &#934; ) = T and <I><B>V</B></I>( &#936; ) &#8800; T, or <I><B>V</B></I>( &#934; ) &#8800; T and <I><B>V</B></I>( &#936; ) = T.
     

 </LI> </OL> 
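<H4>Code Sketch</H4>

   <P> The valuation clauses for the five operators above can be gathered into one recursive evaluator.  An illustrative sketch only; the tuple encoding of formulas (strings for proposition symbols, tuples for compounds) and the function name <CODE>V</CODE> are assumptions of the example. </P>

```python
def V(f, val):
    """Truth value of formula f under valuation val, a dict mapping each
    proposition symbol to True (T) or False (F)."""
    if isinstance(f, str):            # a proposition symbol
        return val[f]
    op = f[0]
    if op == 'not':                   # V(~F) = T iff V(F) != T
        return not V(f[1], val)
    a, b = V(f[1], val), V(f[2], val)
    if op == 'and':                   # T iff both conjuncts are T
        return a and b
    if op == 'or':                    # T iff at least one disjunct is T
        return a or b
    if op == 'if':                    # F only for T antecedent, F consequent
        return (not a) or b
    if op == 'iff':                   # T iff the operands agree
        return a == b
    raise ValueError(f"unknown operator: {op}")

print(V(('if', 'P', 'Q'), {'P': True, 'Q': False}))  # False
```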
<HR><HR><A NAME="408"></A><H2>408.  Logical Equivalence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Logical Equivalence </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Two formulas are logically equivalent iff they have the same truth value on every valuation of both.  

 </P>  
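<H4>Code Sketch</H4>

   <P> The definition can be tested mechanically: represent each formula as a function of its sentence letters and compare the two on every valuation.  A sketch; the helper name <CODE>equivalent</CODE> is illustrative. </P>

```python
from itertools import product

def equivalent(f, g, n):
    """True iff formulas f and g (functions of n sentence letters) have the
    same truth value on every valuation of those letters."""
    return all(f(*vs) == g(*vs) for vs in product([True, False], repeat=n))

cond    = lambda p, q: (not p) or q              # P -> Q
contrap = lambda p, q: (not (not q)) or (not p)  # ~Q -> ~P (contraposition)
print(equivalent(cond, contrap, 2))                              # True
print(equivalent(lambda p, q: p and q, lambda p, q: p or q, 2))  # False
```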
<HR><HR><A NAME="409"></A><H2>409.  inconsistent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: inconsistent </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A formula is inconsistent iff there is no valuation on which it is true.

   </P> <P> A set of formulas is inconsistent iff there is no valuation on which all the formulas in the set are true.  

 </P>  
<HR><HR><A NAME="410"></A><H2>410.  Contradiction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: inconsistent :: Contradiction </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A Truth-Functional WFF of the form P &#8743; &#172;P.  It is common to use the symbol &#8869; to stand for 'contradiction'.  

 </P>  
<HR><HR><A NAME="411"></A><H2>411.  Consistent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Consistent </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

A set of one or more propositions is consistent iff there is at least one valuation (line in the truth table) on which all propositions in the set are true.  

  
<HR><HR><A NAME="412"></A><H2>412.  Valid </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Consistent :: Valid </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A valid wff or proposition is a proposition true on all of its valuations.  

 </P>  
<HR><HR><A NAME="413"></A><H2>413.  Tautology </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Consistent :: Valid :: Tautology </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A tautology is a wff of Propositional Logic that's true on all of its valuations.  

 </P>  
<HR><HR><A NAME="414"></A><H2>414.  Equivalence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Consistent :: Valid :: Tautology :: Equivalence </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A kind of tautology of the form P &#8596; Q.

</P>  <H4>Notes</H4><UL>            

   <LI> P and Q are logically equivalent.  

 </LI> </UL> 
<HR><HR><A NAME="415"></A><H2>415.  Contingent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Consistent :: Contingent </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A formula is contingent iff it is true on some of its valuations and not true on others.  

 </P>  
<HR><HR><A NAME="416"></A><H2>416.  Equivalent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of WFFs :: Equivalent </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Two formulas are logically equivalent iff they have the same truth value on every valuation of both.  

 </P>  
<HR><HR><A NAME="417"></A><H2>417.  Invalid </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of Arguments :: Invalid </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> This definition of 'Invalid' is in terms of 'Valuation'.

   </P> <P> A sequent or argument form is invalid iff there is at least one valuation on which its premises are true and its conclusion is not true.  

 </P>  
<HR><HR><A NAME="418"></A><H2>418.  Valid </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of Arguments :: Valid </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A sequent or argument form is <B>valid</B> iff there is no valuation on which its premises are true and its conclusion is not true.  

 </P>  
<HR><HR><A NAME="419"></A><H2>419.  Counter Example </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Language :: Semantics :: Semantics of Arguments :: Counter Example </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A <B>counterexample</B> to a sequent or argument form is a valuation on which its premises are true and its conclusion is not true.  

 </P>  
<HR><HR><A NAME="420"></A><H2>420.  Substitution Instance </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Basic Concepts :: Substitution Instance </I> ]</UL>
<H4>Alternate Names</H4>                

   <P> Instance

</P>  <H4>Description</H4>                

   <P> When a proposition P follows the form of another proposition Q, we say that P is an instance of Q.  

 </P>  
<HR><HR><A NAME="421"></A><H2>421.  Truth-Functional Form </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Basic Concepts :: Substitution Instance :: Truth-Functional Form </I> ]</UL>
<H4>Examples</H4><UL>            

   <LI> <PRE>
1:  (P &#8594; ((X &#8744; Y) &#8743; Q)) &#8596; (X &#8744; Y)
2:  (A &#8594; (B       &#8743; C)) &#8596; B

   So, 1 is an instance of 2.

Where,
   A = P
   B = X &#8744; Y
   C = Q
</PRE>
So, each symbol of a base form must map to a particular sub-wff of the instance.  

 </LI> </UL> 
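<H4>Code Sketch</H4>

   <P> The requirement that each symbol of the base form map to one particular sub-wff can be enforced with a binding table.  A sketch under an assumed tuple encoding (capitalized strings for symbols, tuples for compound wffs); none of this is standard machinery. </P>

```python
def instance(candidate, form, bind=None):
    """True iff candidate is a substitution instance of form: every symbol
    of the form maps consistently to exactly one sub-wff of the candidate."""
    bind = {} if bind is None else bind
    if isinstance(form, str):          # a form symbol matches any wff,
        if form in bind:               # but always the same one
            return bind[form] == candidate
        bind[form] = candidate
        return True
    return (isinstance(candidate, tuple) and len(candidate) == len(form)
            and candidate[0] == form[0]
            and all(instance(c, f, bind)
                    for c, f in zip(candidate[1:], form[1:])))

# Example 1 above as an instance of 2, with A = P, B = X v Y, C = Q:
wff1 = ('iff', ('if', 'P', ('and', ('or', 'X', 'Y'), 'Q')), ('or', 'X', 'Y'))
wff2 = ('iff', ('if', 'A', ('and', 'B', 'C')), 'B')
print(instance(wff1, wff2))  # True
```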
<HR><HR><A NAME="422"></A><H2>422.  Set Form </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Basic Concepts :: Substitution Instance :: Set Form </I> ]</UL>
<H4>Description</H4>                

   <P> A set 'A' of propositions is an instance of a set 'B' of propositions if and only if:

</P>  <H4></H4><OL TYPE=a>     

   <LI> 'A' and 'B' have the same number of propositions.

   </LI> <LI> Each proposition in 'A' is an instance of a proposition in 'B'.

   </LI> <LI> Each proposition in 'B' has exactly one instance in 'A'.  

 </LI> </OL> 
<HR><HR><A NAME="423"></A><H2>423.  Argument Form </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Basic Concepts :: Substitution Instance :: Argument Form </I> ]</UL>
<H4>Description</H4>                

   <P> An argument form is the set of propositions of a sequent (premises and conclusion inclusive).

   </P> <P> If some propositions of an argument A form a set P which is an instance of the set of premises of an argument form F, then we are entitled to add a new proposition (a conclusion) to P (and A), conformant to the conclusion of F.  Once added, the set A is an instance of the complete argument form F.

</P>  <H4>Notes</H4><UL>            

   <LI> The terms 'argument form' and 'inference rule' are interchangeable.  

 </LI> </UL> 
<HR><HR><A NAME="424"></A><H2>424.  Sequent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Basic Concepts :: Sequent </I> ]</UL>


<H4>Notation</H4>                
<PRE>
      <I>premises</I>  &#8870;  <I>conclusion</I>

Where,
   <I>premises</I> is an optional comma-separated set of wffs
   <I>conclusion</I> is a wff
</PRE>

 <H4>Description</H4>                

   <P> Not actually a part of the logic itself, but rather of the <I>meta</I>logic, a sequent is an assertion that the <I>conclusion</I> follows necessarily from zero or more premises (that the premises entail the conclusion).  Any sequent that is not an inference rule must be supported by a proof.  Once proven, a sequent can be treated as a new inference rule.

   </P> <P> The new symbol '&#8870;' is called a turnstile.  In logic it is usually read as 'entails'.  In this context, 'entails' means specifically that the conclusion is deducible from the premises.  

 </P>  
<HR><HR><A NAME="425"></A><H2>425.  Truth Tables </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables </I> ]</UL>
<H4>Description</H4>                

   <P> A truth table is a powerful tool for analyzing the semantics of the truth-functional operators, wffs and sets of wffs, and for testing the validity of argument forms.  

 </P>  
<HR><HR><A NAME="426"></A><H2>426.  Construction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Construction </I> ]</UL>
<H4>Steps</H4><OL>            

   <LI> Write all the wffs, one after the other, along the top right side of the table.  If this truth table is for an argument, the conclusion is placed last.

   </LI> <LI> Write the sentence letters contained in the formulas in alphabetical order to the left of the formulas.

   </LI> <LI> List all possible T/F value combinations for the sentence letters, one combination per line, below the sentence letters.

   </LI> <LI> Under each letter occurring in a formula, copy the column of T/F's from the corresponding letter on the left.

   </LI> <LI> Find the first operator for which all of its operands' T/F values are now known, and fill in its column based upon the values of the operands in each row.  Repeat until each operator in the formula has a column of T/F's below it.  The truth table is now complete.

</LI> </OL> <H4>Examples</H4>                

<PRE>
A B C  (A &#8743; B) &#8594; C
-------------------
F F F   F F F  T F
F F T   F F F  T T
F T F   F F T  T F
F T T   F F T  T T
T F F   T F F  T F
T F T   T F F  T T
T T F   T T T  F F
T T T   T T T  T T
</PRE>  
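<H4>Code Sketch</H4>

   <P> The construction steps can be mechanized by enumerating all T/F combinations of the sentence letters and computing the column under the top-level operator, as in this sketch reproducing the (A &#8743; B) &#8594; C example. </P>

```python
from itertools import product

t = lambda v: 'T' if v else 'F'   # render a truth value as T/F

# One valuation per row, false-first, as in the example table.
rows = []
print("A B C   (A & B) -> C")
for A, B, C in product([False, True], repeat=3):
    main = (not (A and B)) or C   # value under the top-level conditional
    rows.append(main)
    print(t(A), t(B), t(C), '   ', t(main))
```

   <P> The column under the conditional comes out T T T T T T F T, matching the example. </P>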

  
<HR><HR><A NAME="427"></A><H2>427.  Truth Functional Operators </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Truth Functional Operators </I> ]</UL>
<H4>Description</H4>                

   <P> Truth tables are useful for examining the semantics of the truth-functional operators.  A truth table provides a clearer picture of the truth functional operators than do the valuations.  

 </P>  
<HR><HR><A NAME="428"></A><H2>428.  Negation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Truth Functional Operators :: Negation </I> ]</UL>
<H4>Truth Table</H4>                

<PRE>
   P	&#172;P
   --------
   T	F
   F	T
</PRE>  

  
<HR><HR><A NAME="429"></A><H2>429.  Conjunction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Truth Functional Operators :: Conjunction </I> ]</UL>
<H4>Truth Table</H4>                

<PRE>
A B   A &#8743; B
-------------------
F F    F F F
F T    F F T
T F    T F F
T T    T T T
</PRE>  

  
<HR><HR><A NAME="430"></A><H2>430.  Disjunction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Truth Functional Operators :: Disjunction </I> ]</UL>
<H4>Truth Table</H4>                

<PRE>
   P	Q	P &#8744; Q
   ----------------
   T	T	T
   T	F	T
   F	T	T
   F	F	F
</PRE>  

  
<HR><HR><A NAME="431"></A><H2>431.  Conditional </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Truth Functional Operators :: Conditional </I> ]</UL>
<H4>Truth Table</H4>                

<PRE>
   P	Q	P &#8594; Q
   ----------------
   T	T	T
   T	F	F
   F	T	T
   F	F	T
</PRE>  

  
<HR><HR><A NAME="432"></A><H2>432.  Biconditional </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Truth Functional Operators :: Biconditional </I> ]</UL>
<H4>Truth Table</H4>                

<PRE>
   P	Q	P &#8596; Q
   ----------------
   T	T	T
   T	F	F
   F	T	F
   F	F	T
</PRE>  

  
<HR><HR><A NAME="433"></A><H2>433.  wffs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: wffs </I> ]</UL>
<H4>Description</H4>                

   <P> Truth tables are useful for examining the semantics of wffs.  A truth table provides an exhaustive list of valuations for the wff and provides a key to the semantic classification of the wff.

   </P> <P> Interpretation depends upon examining the column below the top-level operator.  There are three possibilities:

</P>  <H4>Interpretation</H4><UL>            

   <LI> Tautological: all valuations are true.

   </LI> <LI> Inconsistent: all valuations are false.

   </LI> <LI> Contingent: at least one valuation is true and at least one valuation is false.  

 </LI> </UL> 
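<H4>Code Sketch</H4>

   <P> The three-way classification can be computed directly from the column of valuations.  A sketch in which each wff is represented as a function of its sentence letters (an assumption of the example). </P>

```python
from itertools import product

def classify(f, n):
    """Classify a wff (a function of n sentence letters) by the column of
    truth values under its top-level operator."""
    col = [f(*vs) for vs in product([True, False], repeat=n)]
    if all(col):
        return 'tautological'     # true on every valuation
    if not any(col):
        return 'inconsistent'     # false on every valuation
    return 'contingent'           # true on some valuations, false on others

print(classify(lambda p: p or (not p), 1))   # tautological
print(classify(lambda p: p and (not p), 1))  # inconsistent
print(classify(lambda p, q: p and q, 2))     # contingent
```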
<HR><HR><A NAME="434"></A><H2>434.  Set of wffs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Set of wffs </I> ]</UL>
<H4>Description</H4>                

   <P> Truth tables are useful for examining the semantics of sets of wffs.  A truth table provides an exhaustive list of valuations for the wffs and provides a key to the semantic classification of the set.

   </P> <P> Interpretation depends upon comparing the columns below the top-level operator of each wff.  There are two possibilities:

</P>  <H4>Interpretation</H4><UL>            

   <LI> Consistent: there exists at least one row on which all the wffs are true.

   </LI> <LI> Inconsistent: there does not exist even one row on which all the wffs are true.  

 </LI> </UL> 
<HR><HR><A NAME="435"></A><H2>435.  Arguments </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Tables :: Interpretation :: Arguments </I> ]</UL>
<H4>Description</H4>                

   <P> A truth table can be constructed for an argument form by listing all the premises and the conclusion along the top right, and filling in the truth table for each proposition as described above.

</P>  <H4>Interpretation</H4><UL>            

   <LI> Valid: locate the rows in which all of the premises evaluate to true.  If every one of those rows also shows the conclusion true, then the argument form is valid.

   </LI> <LI> Invalid: if, on the other hand, one or more of those rows shows the conclusion false, then the argument form is invalid; each such row is a counterexample.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Notice that we are just looking for the cases that either do or do not conform to our definition of a valid deductive argument.  @see("Definition of Deduction", Argument::Analysis::Classification::By Logic Hierarchy::Deduction)  

 </LI> </UL> 
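<H4>Code Sketch</H4>

   <P> The validity test amounts to searching for a row on which every premise is true and the conclusion is not.  A sketch, again treating each wff as a function of its sentence letters; the helper name <CODE>valid</CODE> is illustrative. </P>

```python
from itertools import product

def valid(premises, conclusion, n):
    """An argument form is valid iff no valuation makes every premise true
    while the conclusion is not true."""
    for vs in product([True, False], repeat=n):
        if all(p(*vs) for p in premises) and not conclusion(*vs):
            return False          # this row is a counterexample
    return True

# Modus ponens (P -> Q, P |- Q) is valid; affirming the consequent is not.
mp = valid([lambda p, q: (not p) or q, lambda p, q: p], lambda p, q: q, 2)
ac = valid([lambda p, q: (not p) or q, lambda p, q: q], lambda p, q: p, 2)
print(mp, ac)  # True False
```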
<HR><HR><A NAME="436"></A><H2>436.  Short Truth Table Method </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Short Truth Table Method </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Indirect Truth Table Method

</LI> </UL> <H4>Description</H4>                

   <P> This is not really a separate tool; it is a variation on the truth table method.  

 </P>  
<HR><HR><A NAME="437"></A><H2>437.  Proposition Set </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Short Truth Table Method :: Proposition Set </I> ]</UL>
<H4>Description</H4>                

   <P> In constructing and interpreting the truth table, assume that the proposition set is consistent.

</P>  <H4>Steps</H4><OL>            

   <LI> Horizontally list all the propositions.

   </LI> <LI> Below the main operator of each proposition write T.

   </LI> <LI> Now work in reverse, trying to find the truth values of each of the proposition symbols.

   </LI> <LI> If we find a case where the proposition symbols have inconsistent values, then our original assumption is false; the propositions are inconsistent.

   </LI> <LI> If, on the other hand, we find that the proposition symbols have consistent values, then the propositions are consistent.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> Notice that this is a direct application of the <I>Reductio ad Absurdum</I> proof technique.  

 </LI> </UL> 
<HR><HR><A NAME="438"></A><H2>438.  Arguments </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Short Truth Table Method :: Arguments </I> ]</UL>
<H4>Description</H4>                

   <P> In constructing and interpreting a truth table, assume that the argument is invalid.

</P>  <H4>Steps</H4><OL>            

   <LI> Horizontally list all the premises then the conclusion.

   </LI> <LI> Below the main operator of the conclusion write F.  Below the main operator of each of the premises write T.

   </LI> <LI> Now work in reverse, trying to find the truth values of each of the proposition symbols.

   </LI> <LI> If we find a case where the proposition symbols have inconsistent values, then our original assumption is false; the argument is valid.

   </LI> <LI> If, on the other hand, we find that the proposition symbols have consistent values, then the argument is invalid.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> Notice that this is a direct application of the <I>Reductio ad Absurdum</I> proof technique.  

 </LI> </UL> 
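<P> The same assume-invalid search can be sketched in Python: look for a valuation making every premise true and the conclusion false (the names and the function encoding are illustrative assumptions):

```python
from itertools import product

def valid(premises, conclusion, symbols):
    """Valid iff no valuation makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False            # counterexample found: the argument is invalid
    return True

# Modus tollens:  P -> Q, not-Q  |-  not-P
premises = [lambda v: (not v["P"]) or v["Q"], lambda v: not v["Q"]]
conclusion = lambda v: not v["P"]
print(valid(premises, conclusion, ["P", "Q"]))  # True
```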
<HR><HR><A NAME="439"></A><H2>439.  Truth Trees </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees </I> ]</UL>
<H4>Description</H4>                

   <P> A truth tree, like a truth table, is a powerful tool for analyzing the semantics of the truth-functional operators, wffs, and sets of wffs, and for testing the validity of argument forms.  Specifically, a truth tree is a way of listing all the valuations on which a wff or set of wffs is true.  

 </P>  
<HR><HR><A NAME="440"></A><H2>440.  Construction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Construction </I> ]</UL>
<H4>Steps</H4><OL>            

   <LI> Begin by listing vertically the wff(s) to be evaluated.  The rules will decompose the list of wff(s) into an inverted tree of atomic and negated atomic wffs.  The order in which rules are applied makes no difference to the final answer, but it is usually most efficient to apply nonbranching rules first.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> A path with an 'X' at the bottom is said to be <I>closed</I>.

   </LI> <LI> A checked item is one that has already been decomposed and is no longer usable.  Some proposition forms can be decomposed but remain unchecked because they can be decomposed again in another way.  

 </LI> </UL> 
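<P> The construction procedure can be sketched in Python by representing wffs as nested tuples and letting each branching rule become a pair of recursive calls.  The encoding and function name are illustrative assumptions; only the negation, conjunction and disjunction rules are implemented here (conditionals can first be rewritten via Material Implication).

```python
# wffs as nested tuples: ('atom','P'), ('not',f), ('and',f,g), ('or',f,g)
def tree_open(todo, literals=frozenset()):
    """True iff some path of the tree stays open (the wffs are satisfiable)."""
    if not todo:
        return True                              # finished open path
    f, rest = todo[0], todo[1:]
    if f[0] == 'atom':
        lit = (f[1], True)
    elif f[0] == 'not' and f[1][0] == 'atom':
        lit = (f[1][1], False)
    else:
        lit = None
    if lit is not None:                          # literal: closure check
        if (lit[0], not lit[1]) in literals:
            return False                         # path closes with an 'X'
        return tree_open(rest, literals | {lit})
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'not':                        # negated negation: unwrap
            return tree_open((g[1],) + rest, literals)
        if g[0] == 'and':                        # negated conjunction: branch
            return (tree_open((('not', g[1]),) + rest, literals)
                    or tree_open((('not', g[2]),) + rest, literals))
        if g[0] == 'or':                         # negated disjunction: stack
            return tree_open((('not', g[1]), ('not', g[2])) + rest, literals)
    if f[0] == 'and':                            # conjunction rule: stack
        return tree_open((f[1], f[2]) + rest, literals)
    if f[0] == 'or':                             # disjunction rule: branch
        return (tree_open((f[1],) + rest, literals)
                or tree_open((f[2],) + rest, literals))
    raise ValueError(f)

P, Q = ('atom', 'P'), ('atom', 'Q')
print(tree_open((('and', P, ('not', P)),)))      # False: every path closes
print(tree_open((('or', P, Q), ('not', Q))))     # True: an open path remains
```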
<HR><HR><A NAME="441"></A><H2>441.  Negation Rule (&#934; and &#172;&#934;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Negation Rule (&#934; and &#172;&#934;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains both a formula and its negation, close the path (place an 'X' at the bottom of the path).  

 </P>  
<HR><HR><A NAME="442"></A><H2>442.  Negated Negation Rule (&#172;&#172;&#934;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Negated Negation Rule (&#172;&#172;&#934;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form &#172;&#172;&#934;, check it and write &#934; at the bottom of every open path that contains this newly checked wff.  

 </P>  
<HR><HR><A NAME="443"></A><H2>443.  Conjunction Rule (&#934; &#8743; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Conjunction Rule (&#934; &#8743; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form (&#934; &#8743; &#936;), check it and write &#934; and &#936; at the bottom of every open path that contains this newly checked wff.  

 </P>  
<HR><HR><A NAME="444"></A><H2>444.  Negated Conjunction Rule &#172;(&#934; &#8743; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Negated Conjunction Rule &#172;(&#934; &#8743; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form &#172;(&#934; &#8743; &#936;), check it and split the bottom of each open path containing this newly checked wff into two branches, at the end of the first of which write &#172;&#934; and at the end of the second of which write &#172;&#936;.  

 </P>  
<HR><HR><A NAME="445"></A><H2>445.  Disjunction Rule (&#934; &#8744; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Disjunction Rule (&#934; &#8744; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form (&#934; &#8744; &#936;), check it and split the bottom of each open path containing this newly checked wff into two branches, at the end of the first of which write &#934; and at the end of the second of which write &#936;.  

 </P>  
<HR><HR><A NAME="446"></A><H2>446.  Negated Disjunction Rule &#172;(&#934; &#8744; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Negated Disjunction Rule &#172;(&#934; &#8744; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form &#172;(&#934; &#8744; &#936;), check it and write both &#172;&#934; and &#172;&#936; at the bottom of every open path that contains this newly checked wff.  

 </P>  
<HR><HR><A NAME="447"></A><H2>447.  Conditional Rule (&#934; &#8594; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Conditional Rule (&#934; &#8594; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form (&#934; &#8594; &#936;), check it and split the bottom of each open path containing this newly checked wff into two branches, at the end of the first of which write &#172;&#934; and at the end of the second of which write &#936;.  

 </P>  
<HR><HR><A NAME="448"></A><H2>448.  Negated Conditional Rule &#172;(&#934; &#8594; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Negated Conditional Rule &#172;(&#934; &#8594; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form &#172;(&#934; &#8594; &#936;), check it and write both &#934; and &#172;&#936; at the bottom of every open path that contains this newly checked wff.  

 </P>  
<HR><HR><A NAME="449"></A><H2>449.  Biconditional Rule (&#934; &#8596; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Biconditional Rule (&#934; &#8596; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form (&#934; &#8596; &#936;), check it and split the bottom of each open path containing this newly checked wff into two branches, at the end of the first of which write both &#934; and &#936;, and at the end of the second of which write both &#172;&#934; and &#172;&#936;.  

 </P>  
<HR><HR><A NAME="450"></A><H2>450.  Negated Biconditional Rule &#172;(&#934; &#8596; &#936;) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Rules :: Negated Biconditional Rule &#172;(&#934; &#8596; &#936;) </I> ]</UL>
<H4>Description</H4>                

   <P> If an open path contains an unchecked wff of the form &#172;(&#934; &#8596; &#936;), check it and split the bottom of each open path containing this newly checked wff into two branches, at the end of the first of which write both &#934; and &#172;&#936;, and at the end of the second of which write both &#172;&#934; and &#936;.  

 </P>  
<HR><HR><A NAME="451"></A><H2>451.  Interpretation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Interpretation </I> ]</UL>
<H4>Concepts</H4>                

   <P> A <B>node</B> is any spot on the tree which contains one or more wffs.

   </P> <P> The <B>root</B> is the node at the top of the tree, the node that you started with when you first listed the wffs to be analyzed.

   </P> <P> A <B>leaf</B> is any node without any branches below it.

   </P> <P> A <B>path</B> is the sequence of nodes visited when traversing from the <I>root</I> to a <I>leaf</I>.

   </P> <P> A path is <B>closed</B> if it contains some atomic wff &#934; and its negation &#172;&#934;.  Closed paths are marked by placing an 'X' at the bottom.

   </P> <P> An <B>open</B> path is one which does not contain an 'X' at the bottom.

</P>  <H4>Basic Interpretation</H4>                

   <P> Each open path from root to leaf represents one set of valuations under which the set of wffs is consistent.  For example, if a path contains P and &#172;Q, then the original wffs are consistent when P is true and Q is false.  A closed path represents no valuations.  Thus, if all paths are closed, the set of wffs must be inconsistent.

   </P> <P> The table below summarizes the interpretations.

<PRE>
             all closed      some open
  --------------------------------------------
  arg:       valid           invalid
  list:      inc             cons
  wff:       inc             taut or cont
  &#172;wff:      taut

  cons = consistent
  inc  = inconsistent
  taut = tautology
  cont = contingent
</PRE>  

 </P>  
<HR><HR><A NAME="452"></A><H2>452.  Truth Functional Operators </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Interpretation :: Truth Functional Operators </I> ]</UL>
<PRE>
   &#172;P
   (a literal: no decomposition)

   P &#8743; Q
   |
   P
   Q

   P &#8744; Q
   /    \
  P      Q

   P &#8594; Q
   /    \
 &#172;P      Q

   P &#8596; Q
   /    \
  P      &#172;P
  Q      &#172;Q
</PRE>  

 
<HR><HR><A NAME="453"></A><H2>453.  wffs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Interpretation :: wffs </I> ]</UL>
<H4>Description</H4>                

   <P> This is actually the trickiest thing to evaluate.  First evaluate the wff.  This tells us whether the wff is consistent or inconsistent.  If it is inconsistent, we are done; but if it is consistent, we probably still want to know whether it is contingent or tautologous.  We can find out by negating the wff and building a tree from the negated wff.  If this tree shows all paths closed, then the negated wff is inconsistent, and therefore the original wff is tautologous.  Otherwise, the original wff is contingent.  

 </P>  
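<P> The negate-and-retest idea can be sketched directly in Python.  Here satisfiability is checked by brute force rather than by an actual tree, and all names and the function encoding are illustrative assumptions:

```python
from itertools import product

def satisfiable(f, symbols):
    """Brute-force stand-in for 'the tree has an open path'."""
    return any(f(dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

def classify(f, symbols):
    if not satisfiable(f, symbols):
        return "inconsistent"
    if not satisfiable(lambda v: not f(v), symbols):
        return "tautologous"        # the negated wff's tree closes everywhere
    return "contingent"

print(classify(lambda v: v["P"] or not v["P"], ["P"]))    # tautologous
print(classify(lambda v: v["P"] and v["Q"], ["P", "Q"]))  # contingent
```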
<HR><HR><A NAME="454"></A><H2>454.  Arguments </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Interpretation :: Arguments </I> ]</UL>
<H4>Description</H4>                

   <P> An argument can be evaluated by using indirect proof.  List the premises, then list the negation of the conclusion.  If all paths are closed (the propositions are inconsistent), then the original argument must be valid.  

 </P>  
<HR><HR><A NAME="455"></A><H2>455.  Path </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Definitions :: Path </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A path through a tree (in any stage of construction) is a complete column of formulas from the top to the bottom of the tree.  

 </P>  
<HR><HR><A NAME="456"></A><H2>456.  Finished </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Definitions :: Finished </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A path is finished if it is closed or if the only unchecked formulas it contains are sentence letters or negations of sentence letters so that no more rules apply to its formulas.  A tree is finished if all of its paths are finished.  

 </P>  
<HR><HR><A NAME="457"></A><H2>457.  Open Path </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Definitions :: Open Path </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An open path is a path that has not been ended with an 'X'.  

 </P>  
<HR><HR><A NAME="458"></A><H2>458.  Closed Path </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Definitions :: Closed Path </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A closed path is a path that has been ended with an 'X'.  

 </P>  
<HR><HR><A NAME="459"></A><H2>459.  Occurrence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Truth Trees :: Definitions :: Occurrence </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A formula occurs on a path if (1) it is on that path and is not merely a subformula of some other formula on that path, and (2) it is unchecked.  </P>

  
<HR><HR><A NAME="460"></A><H2>460.  Algebra </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra </I> ]</UL>
<H4>Description</H4>                

   <P> The logic algebra allows for the algebraic manipulation of propositions.

</P>  <H4>Uses</H4><UL>            

   <LI> Simplification of complex propositions.

   </LI> <LI> Changing the form of a proposition.  

 </LI> </UL> 
<HR><HR><A NAME="461"></A><H2>461.  Algebraic Equivalents </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Algebraic Properties

</LI> </UL> <H4>Description</H4>                

   <P> An <I>algebraic equivalent</I> is a logical expression which equates two propositions.  Each has the form P &#8596; Q.  All algebraic equivalents are tautological.  

 </P>  
<HR><HR><A NAME="462"></A><H2>462.  Association </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Association </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (P &#8744; (Q &#8744; R)) &#8596; ((P &#8744; Q) &#8744; R)

   </LI> <LI> (P &#8743; (Q &#8743; R)) &#8596; ((P &#8743; Q) &#8743; R)  

 </LI> </UL> 
<HR><HR><A NAME="463"></A><H2>463.  Commutation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Commutation </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (P &#8743; Q) &#8596; (Q &#8743; P)

   </LI> <LI> (P &#8744; Q) &#8596; (Q &#8744; P)  

 </LI> </UL> 
<HR><HR><A NAME="464"></A><H2>464.  Complement </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Complement </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P &#8744; &#172;P &#8596; T

   </LI> <LI> P &#8743; &#172;P &#8596; F

   </LI> <LI> &#172;T &#8596; F

   </LI> <LI> &#172;F &#8596; T  

 </LI> </UL> 
<HR><HR><A NAME="465"></A><H2>465.  De Morgan's Law </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: De Morgan's Law </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(P &#8743; Q) &#8596; (&#172;P &#8744; &#172;Q)

   </LI> <LI> &#172;(P &#8744; Q) &#8596; (&#172;P &#8743; &#172;Q)  

 </LI> </UL> 
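<P> Because both forms are tautological, each can be confirmed by checking every valuation of P and Q, for example in Python:

```python
from itertools import product

# Exhaustively check both De Morgan equivalences over P and Q.
for P, Q in product([True, False], repeat=2):
    assert (not (P and Q)) == ((not P) or (not Q))
    assert (not (P or Q)) == ((not P) and (not Q))
print("both forms hold on all four valuations")
```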
<HR><HR><A NAME="466"></A><H2>466.  Distribution </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Distribution </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (P &#8743; (Q &#8744; R)) &#8596; ((P &#8743; Q) &#8744; (P &#8743; R))

   </LI> <LI> (P &#8744; (Q &#8743; R)) &#8596; ((P &#8744; Q) &#8743; (P &#8744; R))  

 </LI> </UL> 
<HR><HR><A NAME="467"></A><H2>467.  Double Negation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Double Negation </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Involution

</LI> </UL> <H4>Forms</H4><UL>            

   <LI> P &#8596; &#172;&#172;P  

 </LI> </UL> 
<HR><HR><A NAME="468"></A><H2>468.  Exportation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Exportation </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> ((P &#8743; Q) &#8594; R) &#8596; (P &#8594; (Q &#8594; R))  

 </LI> </UL> 
<HR><HR><A NAME="469"></A><H2>469.  Idempotence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Idempotence </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Tautology

</LI> </UL> <H4>Forms</H4><UL>            

   <LI> P &#8596; (P &#8743; P)

   </LI> <LI> P &#8596; (P &#8744; P)  

 </LI> </UL> 
<HR><HR><A NAME="470"></A><H2>470.  Identity Laws </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Identity Laws </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P &#8743; T &#8596; P

   </LI> <LI> P &#8743; F &#8596; F

   </LI> <LI> P &#8744; T &#8596; T

   </LI> <LI> P &#8744; F &#8596; P  

 </LI> </UL> 
<HR><HR><A NAME="471"></A><H2>471.  Material Implication </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Material Implication </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (P &#8594; Q) &#8596; (&#172;P &#8744; Q)  

 </LI> </UL> 
<HR><HR><A NAME="472"></A><H2>472.  Transposition </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Algebraic Equivalents :: Transposition </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (P &#8594; Q) &#8596; (&#172;Q &#8594; &#172;P)  

 </LI> </UL> 
<HR><HR><A NAME="473"></A><H2>473.  Simplification </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Simplification </I> ]</UL>
<H4>Description</H4>                

   <P> There are many ways to write a formula such that it evaluates to the same thing for any particular set of inputs; that is how you determine whether two wffs are equivalent.  Inevitably, some of these wffs will be simpler than others (involve fewer logic operations), and many applications benefit from smaller formulas.  So, reduction of a wff to the simplest possible equivalent form is an important capability.  There are three common ways of simplifying a wff:  algebraic manipulation (a trial-and-error approach), and Karnaugh Maps and Quine-McCluskey (which are algorithms).  

 </P>  
<HR><HR><A NAME="474"></A><H2>474.  Algebraic Manipulation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Simplification :: Algebraic Manipulation </I> ]</UL>
<H4>Description</H4>                

   <P> One way of simplifying a wff is through repeated applications of one or more algebraic equivalents.  Although this is largely a trial-and-error technique, an experienced person can easily see whether a wff can be further simplified.

   </P> <P> Application of an algebraic equivalent is fairly simple.  Given a wff W which you wish to simplify, and an equivalent P &#8596; Q (where W, P and Q may be atomic or non-atomic wffs), if W is or contains an instance of P, then P may be replaced by the corresponding instance of Q.  Similarly, if W contains an instance of Q, then Q may be replaced by the corresponding instance of P.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
   &#172;(&#172;P &#8744; Q)
   &#172;&#172;P &#8743; &#172;Q	(after applying De Morgan's law to the entire wff)
   P &#8743; &#172;Q		(after applying Double Negation to &#172;&#172;P)
</PRE>  

 </LI> </UL> 
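<P> Since each applied equivalent is tautological, every line of such a derivation must agree with the original on all valuations.  A quick Python check that &#172;(&#172;P &#8744; Q) and P &#8743; &#172;Q agree everywhere (an illustrative check, not part of the technique itself):

```python
from itertools import product

# The start and end of the derivation must have identical truth tables.
for P, Q in product([True, False], repeat=2):
    assert (not ((not P) or Q)) == (P and (not Q))
print("the starting and simplified wffs are equivalent")
```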
<HR><HR><A NAME="475"></A><H2>475.  Karnaugh Maps </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Simplification :: Karnaugh Maps </I> ]</UL>
<H4>Description</H4>                

   <P> These maps are a useful algorithmic technique for simplifying wffs.  Their main advantage is that they are simple to learn.  Their main disadvantage is that they are not convenient for a large number of proposition symbols.  

 </P>  
<HR><HR><A NAME="476"></A><H2>476.  Quine-McCluskey </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Simplification :: Quine-McCluskey </I> ]</UL>
<H4>Description</H4>                

   <P> Quine-McCluskey is useful for simplifying a logic formula whose truth table contains at least one row for which the truth value does not matter.  Quine-McCluskey is more appropriate for use by computers, while Karnaugh Maps are more appropriate for use by humans.

</P>  <H4>Algorithm</H4><OL>            

   <LI> Produce a truth table for the expression to be minimized

<PRE>
   For expression truth values use: 't' (true), 'f' (false), 'x' (doesn't matter)
</PRE>

   </LI> <LI> List all those input combinations which evaluate to 't' or 'x'.

   </LI> <LI> Organize this list into groups according to the number of t's.  Order items in each group in binary order (t=1, f=0).

   </LI> <LI> Compare each term in a group to every term in the next group.  If they differ in only one position, combine them to form a new entry in which the differing position is replaced by 'x'.  Insert the new term into the appropriate group and mark the two original terms.

   </LI> <LI> Repeat step 4 until no more combinations can be formed.  Any unmarked combinations are called 'prime implicants'.

   </LI> <LI> Now create a table with all the original input forms across the top and the prime implicants down the left side.

   </LI> <LI> Check each row-column intersection.  If the prime implicant covers the column combination, place an 'x'.  (E.g., if the prime implicant is '0xx0' and the column is '0000', place an 'x'; if the column is '1010', it is not covered by the prime implicant, so leave it blank.)

   </LI> <LI> Identify the essential prime implicants.  These are the prime implicants associated with the columns which contain only one 'x'.

   </LI> <LI> Then, if there is a column in which 'x' appears only in non-essential prime implicants, that column is not covered by the prime implicants.  Thus, we must also select one of those rows as an essential prime implicant.

   </LI> <LI> For each of the selected prime implicants, use P for 1's and &#172;P for 0's, ignoring 'x'.  E.g., if our original atomic propositions are A, B, C and D, then 0xx0 would be represented as &#172;A &#8743; &#172;D, and xx10 would be represented as C &#8743; &#172;D.

   </LI> <LI> Finally, OR all these propositions together to form the minimal proposition.  

 </LI> </OL> 
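<P> The algorithm above can be sketched compactly in Python.  This is an illustrative sketch only:  it represents terms as strings over '0', '1' and 'x', and it replaces the prime-implicant table of steps 6&#8211;9 with a simple greedy cover, which is adequate for small examples.

```python
from itertools import combinations

def quine_mccluskey(n_vars, minterms, dont_cares=()):
    """Return a covering set of prime implicants as strings over '0','1','x'."""
    def covers(term, m):
        bits = format(m, f'0{n_vars}b')
        return all(t == 'x' or t == b for t, b in zip(term, bits))
    terms = {format(m, f'0{n_vars}b') for m in set(minterms) | set(dont_cares)}
    primes = set()
    while terms:                    # steps 3-5: combine terms differing in one bit
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
            if len(diff) == 1 and 'x' not in (a[diff[0]], b[diff[0]]):
                merged.add(a[:diff[0]] + 'x' + a[diff[0] + 1:])
                used |= {a, b}
        primes |= terms - used      # unmarked terms are prime implicants
        terms = merged
    chosen, uncovered = [], set(minterms)
    while uncovered:                # greedy cover standing in for steps 6-9
        best = max(primes, key=lambda t: sum(covers(t, m) for m in uncovered))
        chosen.append(best)
        uncovered -= {m for m in uncovered if covers(best, m)}
    return chosen

# f(A, B) with minterms 1, 2, 3 is A OR B: implicants '1x' (A) and 'x1' (B)
print(sorted(quine_mccluskey(2, [1, 2, 3])))  # ['1x', 'x1']
```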
<HR><HR><A NAME="477"></A><H2>477.  Negation (NNF) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Special Forms :: Normal Forms :: Negation (NNF) </I> ]</UL>
<H4>Description</H4>                

   <P> A wff is in negation normal form if all occurrences of negation occur immediately in front of proposition symbols.

</P>  <H4>Examples</H4><UL>            

   <LI> &#172;P &#8743; &#172;B  

 </LI> </UL> 
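<P> Conversion to negation normal form can be sketched as a small recursive function over a nested-tuple encoding of wffs (the encoding and names are illustrative assumptions).  It simply applies Double Negation and De Morgan's laws until every negation sits directly on an atom:

```python
# wffs as nested tuples: ('atom','P'), ('not',f), ('and',f,g), ('or',f,g)
def to_nnf(f):
    """Push negations inward until every 'not' applies to an atom."""
    if f[0] == 'atom':
        return f
    if f[0] in ('and', 'or'):
        return (f[0], to_nnf(f[1]), to_nnf(f[2]))
    g = f[1]                                  # f = ('not', g)
    if g[0] == 'atom':
        return f                              # already a literal
    if g[0] == 'not':                         # Double Negation
        return to_nnf(g[1])
    dual = 'or' if g[0] == 'and' else 'and'   # De Morgan's law
    return (dual, to_nnf(('not', g[1])), to_nnf(('not', g[2])))

P, Q = ('atom', 'P'), ('atom', 'Q')
print(to_nnf(('not', ('or', ('not', P), Q))))
# -> ('and', ('atom', 'P'), ('not', ('atom', 'Q')))
```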
<HR><HR><A NAME="478"></A><H2>478.  Disjunctive (DNF) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Special Forms :: Normal Forms :: Disjunctive (DNF) </I> ]</UL>
<H4>Description</H4>                

   <P> A wff is in disjunctive normal form if it is a disjunction of one or more conjunctions of one or more literals.

</P>  <H4>Form</H4><UL>            

   <LI> (A &#8743; B) &#8744; (P &#8743; Q) &#8744; (X &#8743; Y) &#8744; ...  

 </LI> </UL> 
<HR><HR><A NAME="479"></A><H2>479.  Conjunctive (CNF) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Algebra :: Special Forms :: Normal Forms :: Conjunctive (CNF) </I> ]</UL>
<H4>Description</H4>                

   <P> A wff is in conjunctive normal form if it is a conjunction of one or more disjunctions of one or more literals.

</P>  <H4>Form</H4><UL>            

   <LI> (A &#8744; B) &#8743; (P &#8744; Q) &#8743; (X &#8744; Y) &#8743; ...  

 </LI> </UL> 
<HR><HR><A NAME="480"></A><H2>480.  Calculi </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi </I> ]</UL>
See <A HREF="#1281" TARGET="baseframe"> Definition of Calculus </A>.

<H4>Description</H4>                

   <P> The Propositional Calculus is a tool for building proofs in Propositional Logic.  This calculus is typically used for constructing proofs of consequence, but can also be used for constructing proofs of non-consequence (counter-examples).

   </P> <P> There are many calculi for the propositional logic advocated by different authors.  Each of these calculi has its own advantages.  Several of the more common calculi are listed in this section.  

 </P>  
<HR><HR><A NAME="481"></A><H2>481.  Two-Column Proof Structure </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Two-Column Proof Structure </I> ]</UL>
See <A HREF="#283" TARGET="baseframe"> Two Column Proofs </A>.

<H4>Description</H4>                

   <P> A two-column proof is a sequence of steps, one-per-line, of the following form:

<PRE>
	&lt;step&gt;	&lt;proposition&gt;	&lt;reason&gt;
</PRE>

   </P> <P> The &lt;reason&gt; consists of a comma-separated list of steps cited, followed by the name of the rule (usually abbreviated) which uses those cited steps to derive the proposition asserted on the current line, thus forming an instance of the complete argument form described by the rule.  In a few cases, such as when listing a premise, citations are omitted.  

 </P>  
<HR><HR><A NAME="482"></A><H2>482.  Fitch (Intr/Elim) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) </I> ]</UL>
<H4>Description</H4>                

   <P> The Fitch calculus is characterized by its nested subproofs.  Fitch-style calculi tend to mimic most closely how people reason naturally.  

 </P>  
<HR><HR><A NAME="483"></A><H2>483.  Subproofs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Subproofs </I> ]</UL>
<H4>Description</H4>                

   <P> A subproof is a means for presenting hypothetical arguments within a proof.  To indicate a subproof, the propositions are indented.  Every hypothetical argument (subproof) begins with a proposition called a hypothesis (the &lt;reason&gt; is 'H' or 'hypothesis').

   </P> <P> Once a subproof is ended, all its steps are 'closed'.  No closed steps may be cited for use in new inferences.  The subproof as a whole, however, can be cited using the notation i-j, where i is the first step of the subproof and j is the last step.

   </P> <P> Subproofs can be nested without limit, and any step in a subproof may cite any 'open' step before it (within or outside the subproof itself).  

 </P>  
<HR><HR><A NAME="484"></A><H2>484.  Inference Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An inference rule is an argument form which has been proven to be valid.  The Fitch Propositional Calculus has two fundamental inference rules for each operator:  an Introduction rule and an Elimination rule.  The former is for introducing the operator into the proof; the latter is for eliminating the operator.

   </P> <P> These ten rules are fundamental to Fitch; they themselves are provable, but not within Fitch itself.  Newly proven argument forms become new rules, so it is possible to prove ever more complex forms.  

 </P>  
<HR><HR><A NAME="485"></A><H2>485.  Sequent Notation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Sequent Notation </I> ]</UL>


<H4>Description</H4>                

   <P> A modified sequent notation has been developed to accommodate the Fitch calculus.

</P>  <H4>See Also</H4><UL>            

   <LI> See <A HREF="#424" TARGET="baseframe">sequent</A>.  

 </LI> </UL> 
<HR><HR><A NAME="486"></A><H2>486.  Multiple conclusion forms </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Sequent Notation :: Multiple conclusion forms </I> ]</UL>
<H4>Description</H4>                

   <P> Many rules permit more than one conclusion form.  In such a case, they are separated by commas (just as when there are multiple premises).  However, only one of the conclusion forms may be inferred at each step of the proof.  If multiple forms are needed, then the rule should be applied multiple times.

</P>  <H4>Examples</H4><UL>            

   <LI> P &#8743; Q  &#8870;  P, Q<BR>
   This states that it is possible to infer either P or Q from a premise of the form P &#8743; Q.  

 </LI> </UL> 
<HR><HR><A NAME="487"></A><H2>487.  subproofs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Sequent Notation :: subproofs </I> ]</UL>
<H4>Description</H4>                

   <P> Subproofs are indicated as parenthesized arguments within the main form.<BR>

</P>  <H4>Examples</H4><UL>            

   <LI> ( P  &#8870;  Q &#8743; &#172;Q )  &#8870;  &#172;P<BR>
   This states that we may infer the negation of the hypothesis of a subproof (hypothetical argument) whenever that subproof ends in a contradiction.
  

 </LI> </UL> 
<HR><HR><A NAME="488"></A><H2>488.  Truth Assertions </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth Assertions </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A truth assertion is a wff whose cited reason is an assumption rather than an inference.  There are limits to the possible uses of truth assertions; in the Propositional Calculus, there are two such rules.  All truth assertion inference rules have the following form.

   </P> <P> &#8870;  P  

 </P>  
<HR><HR><A NAME="489"></A><H2>489.  Assumption (A) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth Assertions :: Assumption (A) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8870;  P

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> A

   </LI> <LI> Assume

   </LI> <LI> Assumption

   </LI> <LI> Given

</LI> </UL> <H4>Description</H4>                

   <P> Assumptions are propositions which we choose to accept as being true without logical proof.  The rest of the proof is developed out of these assumptions.

   </P> <P> Given a sequent &#915;  &#8870;  &#934; where &#915; is a set of premises (or premise forms) and &#934; is a conclusion (or conclusion form), each &#947; in &#915; is an assumption.  It is conventional to start the proof with all the assumptions first, one per proof step.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
   P &#8594; Q, &#172;Q  &#8870;  &#172;P

1.   P &#8594; Q	A
2.   &#172;Q		A
3.   ...
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="490"></A><H2>490.  Hypothesis (H) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth Assertions :: Hypothesis (H) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8870;  P

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> H

   </LI> <LI> Hypothesis

   </LI> <LI> Suppose

   </LI> <LI> Supposition

</LI> </UL> <H4>Description</H4>                

   <P> A hypothesis is the first proposition of a subproof.  A hypothesis is actually a hypothesized truth rather than a real one.  Informal proofs have a similar concept where they introduce a hypothesis by stating, "Now, let's suppose (hypothesize) that ...".  Very often the goal is to prove the hypothesis wrong.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
   ( P  &#8870;  Q )  &#8870;  P &#8594; Q

1.   |   P		H
2.   |   ...
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="491"></A><H2>491.  Truth-Functional Operators </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators </I> ]</UL>
<H4>Description</H4>                

   <P> The Fitch calculus provides two inference rules for each truth-functional operator: an introduction rule and an elimination rule.  These rules respectively introduce or eliminate the concerned operator from a given wff.  

 </P>  
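<P> As a sanity check, the semantic validity of any proposed rule form can be verified mechanically by exhausting truth assignments.  The following Python sketch tests &#8743;E and &#8594;E (the 'is_valid' helper and the lambda encoding of wffs are illustrative conventions, not a standard library API): </P>

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """Return True iff every assignment making all premises true
    also makes the conclusion true (semantic validity)."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counter-example found
    return True

imp = lambda a, b: (not a) or b   # material conditional

# Conjunction Elimination:  P and Q entails P
print(is_valid([lambda v: v['P'] and v['Q']],
               lambda v: v['P'], ['P', 'Q']))          # True

# Conditional Elimination (modus ponens)
print(is_valid([lambda v: imp(v['P'], v['Q']), lambda v: v['P']],
               lambda v: v['Q'], ['P', 'Q']))          # True
```

<P> Any of the rule forms in the sections that follow can be checked the same way. </P>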
<HR><HR><A NAME="492"></A><H2>492.  Elimination (&#172;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Negation :: Elimination (&#172;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#172;&#172;&#934; &#8870; &#934;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#172;E

   </LI> <LI> &#172;Elim

   </LI> <LI> &#172;Elimination

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   &#172;&#172;P	A
   2.   P		&#172;E
   </PRE>  

 </LI> </UL> 
<HR><HR><A NAME="493"></A><H2>493.  Introduction (&#172;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Negation :: Introduction (&#172;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> ( &#934; &#8870; &#936; &#8743; &#172;&#936; )  &#8870;  &#172;&#934;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#172;I

   </LI> <LI> &#172;Intro

   </LI> <LI> &#172;Introduction

   </LI> <LI> RAA

   </LI> <LI> Indirect Proof

</LI> </UL> <H4>Concept</H4>                

   <P> If from some hypothesis P a contradiction is inferred, then you can conclude &#172;P.

</P>  <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P &#8594; Q	A
   2.   &#172;Q		A
   3.   |   P		H
   4.   |   Q		1,3 &#8594;E
   5.   |   Q &#8743; &#172;Q	2,4 &#8743;I
   6.   &#172;P		3-5 &#172;I
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this rule is Reductio ad Absurdum (RAA).  

 </LI> </UL> 
<HR><HR><A NAME="494"></A><H2>494.  Elimination (&#8743;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Conjunction :: Elimination (&#8743;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934; &#8743; &#936; &#8870; &#934;, &#936;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8743;E

   </LI> <LI> &#8743; Elim

   </LI> <LI> &#8743; Elimination

   </LI> <LI> Simp

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P &#8743; Q	A
   2.   P		1 &#8743;E
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this rule is Simplification (Simp)  

 </LI> </UL> 
<HR><HR><A NAME="495"></A><H2>495.  Introduction (&#8743;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Conjunction :: Introduction (&#8743;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934;, &#936; &#8870; &#934; &#8743; &#936;, &#936; &#8743; &#934;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8743;I

   </LI> <LI> &#8743;Intro

   </LI> <LI> &#8743;Introduction

   </LI> <LI> Conj

   </LI> <LI> Adjunction

   </LI> <LI> Adj.

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P		A
   2.   Q		A
   3.   P &#8743; Q	1,2 &#8743;I
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this rule is Conjunction (Conj.).  

 </LI> </UL> 
<HR><HR><A NAME="496"></A><H2>496.  Elimination (&#8744;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Disjunction :: Elimination (&#8744;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934; &#8744; &#936;, &#934; &#8594; &#920;, &#936; &#8594; &#920; &#8870; &#920;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8744;E

   </LI> <LI> &#8744;Elim

   </LI> <LI> &#8744;Elimination

   </LI> <LI> CD

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P &#8744; Q	A
   2.   P &#8594; R	A
   3.   Q &#8594; R	A
   4.   R		1,2,3 &#8744;E
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this form is the (simple) Constructive Dilemma (CD).  

 </LI> </UL> 
<HR><HR><A NAME="497"></A><H2>497.  Introduction (&#8744;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Disjunction :: Introduction (&#8744;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934;  &#8870;  &#934; &#8744; &#936;, &#936; &#8744; &#934;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8744;I

   </LI> <LI> &#8744;Intro

   </LI> <LI> &#8744;Introduction

   </LI> <LI> Add

</LI> </UL> <H4>Description</H4>                

   <P> This is an odd-looking form.  The &#936; in the conclusion wff represents any arbitrary wff whatever.

</P>  <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P		A
   2.   P &#8744; Q	1 &#8744;I
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> In the form above, &#936; represents any arbitrary proposition, so &#8744;I may actually introduce a new proposition symbol.

   </LI> <LI> The traditional name for this rule is Addition (Add.).  

 </LI> </UL> 
<HR><HR><A NAME="498"></A><H2>498.  Elimination (&#8594;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Conditional :: Elimination (&#8594;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934; &#8594; &#936;, &#934;  &#8870;  &#936;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8594;E

   </LI> <LI> &#8594;Elim

   </LI> <LI> &#8594;Elimination

   </LI> <LI> MP

   </LI> <LI> MPP

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   F		A
   2.   F  &#8594;  G	A
   3.   G		1,2 &#8594;E
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this form is Modus Ponendo Ponens (MPP), usually shortened to Modus Ponens (MP).  

 </LI> </UL> 
<HR><HR><A NAME="499"></A><H2>499.  Introduction (&#8594;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Conditional :: Introduction (&#8594;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> ( &#934;  &#8870;  &#936; )  &#8870;  &#934; &#8594; &#936;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8594;I

   </LI> <LI> &#8594;Intro

   </LI> <LI> &#8594;Introduction

   </LI> <LI> CP

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P &#8594; Q	A
   2.   Q &#8594; R	A
   3.   |   P		H
   4.   |   Q		1,3 &#8594;E
   5.   |   R		2,4 &#8594;E
   6.   P &#8594; R	3-5 &#8594;I
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Conditional Introduction is traditionally known as Conditional Proof.  

 </LI> </UL> 
<HR><HR><A NAME="500"></A><H2>500.  Elimination (&#8596;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Bi-conditional :: Elimination (&#8596;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934; &#8596; &#936;  &#8870;  &#934; &#8594; &#936;, &#936; &#8594; &#934;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8596;E

   </LI> <LI> &#8596;Elim

   </LI> <LI> &#8596;Elimination

   </LI> <LI> ME

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P &#8596; Q	A
   2.   P &#8594; Q	1 &#8596;E
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this rule is Material Equivalence (ME).  

 </LI> </UL> 
<HR><HR><A NAME="501"></A><H2>501.  Introduction (&#8596;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Truth-Functional Operators :: Bi-conditional :: Introduction (&#8596;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#934; &#8594; &#936;, &#936; &#8594; &#934;  &#8870;  &#934; &#8596; &#936;, &#936; &#8596; &#934;

</LI> </UL> <H4>Reason</H4><UL>            

   <LI> &#8596;I

   </LI> <LI> &#8596;Intro

   </LI> <LI> &#8596;Introduction

   </LI> <LI> ME

</LI> </UL> <H4>Example</H4><UL>            

   <LI> <PRE>
   1.   P &#8594; Q	A
   2.   Q &#8594; P	A
   3.   P &#8596; Q	1,2 &#8596;I
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The traditional name for this rule is Material Equivalence (ME).  

 </LI> </UL> 
<HR><HR><A NAME="502"></A><H2>502.  for Proofs of Consequence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: for Proofs of Consequence </I> ]</UL>
<H4>Proof Technique</H4><OL>            

   <LI> Assume each premise &#947; in &#915;.

   </LI> <LI> Apply valid inference rules to derive the desired conclusion &#936;.

</LI> </OL> <H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#278" TARGET="baseframe">Proof of Consequence</A>.  

 </LI> </UL> 
<HR><HR><A NAME="503"></A><H2>503.  Theorems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: for Proofs of Consequence :: Theorems </I> ]</UL>
<H4>Description</H4>                

   <P> As the definition says, the theorems are the truths of the system.  The truths of Propositional Logic are the tautologies.  Thus, the theorems of Propositional Logic are the tautologies.

   </P> <P> Because a theorem has no premises, the only way to begin a proof of a theorem is with a hypothesis or another theorem.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
   &#8870;  P &#8744; &#172;P

1.   |   &#172;(P &#8744; &#172;P)	H (for &#172;I)
2.   |   &#172;P &#8743; &#172;&#172;P	1 DeMorgans
3.   &#172;&#172;(P &#8744; &#172;P)	1-2 &#172;I
4.   P &#8744; &#172;P	3 &#172;E
</PRE>

</LI> </UL> <H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#1298" TARGET="baseframe">theorem</A>.  

 </LI> </UL> 
<HR><HR><A NAME="504"></A><H2>504.  Equivalences </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: for Proofs of Consequence :: Theorems :: Equivalences </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An equivalence theorem of a calculus is any theorem of the form '&#8870;  A &#8596; B'.

</P>  <H4>Elucidation</H4>                

   <P> Since all equivalences are theorems of the calculus, and all theorems of the calculus are tautologies provable from within the calculus, it follows that every equivalence can be proved from within the calculus.  Thus, all the equivalence rules of the traditional calculus can be proven as equivalence theorems.

   </P> <P> Deriving these rules is good practice for building proofs, but also can prove useful in future proof building.  These forms were selected out by scholars of classical logic because they are forms which appear often.  Thus, even in a modern deductive calculus such as Fitch or Gensler, they can be a very useful shortcut in an otherwise long and tedious proof.

   </P> <P> As an example, the Double Negative equivalence rule becomes the Double Negative Equivalence Theorem in Fitch which looks like this:  '&#8870;  P  &#8596;  &#172;&#172;P'.  The actual proof of this and all other equivalence theorems is left as an exercise.

</P>  <H4>Proving</H4>                

   <P> These proofs can usually be solved in two steps.  Prove the conditionals separately, 'A &#8594; B' and 'B &#8594; A', then use &#8596;I to prove the final biconditional.  If this technique turns out to be difficult, try beginning the proof by hypothesizing the negation of the equivalence and deriving a contradiction.

  

 </P>  
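<P> The two-step strategy is justified semantically: the conjunction of the two conditionals has exactly the truth table of the biconditional.  A quick Python check (an illustrative sketch, not part of any calculus): </P>

```python
imp = lambda a, b: (not a) or b   # material conditional

# (P -> Q) conjoined with (Q -> P) agrees with "P iff Q"
# on every truth assignment.
agrees = all((imp(p, q) and imp(q, p)) == (p == q)
             for p in (True, False) for q in (True, False))
print(agrees)   # True
```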
<HR><HR><A NAME="505"></A><H2>505.  for Proofs of Nonconsequence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: for Proofs of Nonconsequence </I> ]</UL>
<H4>Technique</H4>                

   <P> Given an argument of the form &#915;  &#8870;  &#934;:  assume each premise, then hypothesize &#172;&#934;.  Use inference rules to decompose as many of the propositions as possible into atomic and negated atomic propositions.  Aside from the actual proof, list all the proposition symbols used in the proof.  Assign each of these symbols a value of true or false depending upon whether you derived it as an atomic proposition or as a negated atomic proposition (atomic propositions get the value T, negated atomic propositions get the value F).  Using this set of values for the proposition symbols in the original sequent, evaluate each of the premises and the conclusion.  If all the premises evaluate to true and the conclusion evaluates to false, you have found a counter-example to the original sequent.  The argument is thus invalid.

</P>  <H4>Examples</H4><UL>            

   <LI> Proof of nonconsequence<PRE>
      P &#8594; Q, Q  &#8870;  P

   1.   P &#8594; Q	A
   2.   Q		A
   3.   |   &#172;P	H (for &#172;I)

Thus, we have:

   P:   F
   Q:   T

Which results in the following valuations for the premises:

   P &#8594; Q		==>	T
   Q		==>	T

And the conclusion's valuation...

   P		F	==>	F

So we have a counter-example with all true premises and a false conclusion.  Thus the argument is invalid.
</PRE>

</LI> </UL> <H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#279" TARGET="baseframe">Proof of Nonconsequence</A>.  

 </LI> </UL> 
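<P> The decomposition technique amounts to a search for a falsifying truth assignment, which can be sketched directly in Python (the 'find_counterexample' helper is an illustrative name, not a standard function): </P>

```python
from itertools import product

def find_counterexample(premises, conclusion, atoms):
    """Return an assignment making every premise true and the
    conclusion false, or None if no such assignment exists."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return v
    return None

imp = lambda a, b: (not a) or b   # material conditional

# The invalid sequent from the example above.
cx = find_counterexample([lambda v: imp(v['P'], v['Q']), lambda v: v['Q']],
                         lambda v: v['P'], ['P', 'Q'])
print(cx)   # {'P': False, 'Q': True}
```

<P> The assignment returned is exactly the valuation (P: F, Q: T) found by hand in the example. </P>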
<HR><HR><A NAME="506"></A><H2>506.  Proof Strategies </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies </I> ]</UL>
<H4>Description</H4>                

   <P> Although Fitch makes building proofs much easier than it used to be, building proofs can still be difficult.  This section lists some strategies that can help in building a proof.  

 </P>  
<HR><HR><A NAME="507"></A><H2>507.  By Conclusion Form </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: By Conclusion Form </I> ]</UL>
<H4>Description</H4>                

   <P> This technique is usually only effective for Truth-Functional Calculus.  Simply look at the form of the conclusion, then follow the direction of the corresponding 'hint' below.

</P>  <H4>Atomic Formula</H4>                

   <P> If no other strategy is immediately apparent, hypothesize the negation of the conclusion for &#172;I.  If this is successful, then the conclusion can be obtained after the &#172;I step by &#172;E.

</P>  <H4>Negated Formula</H4>                

   <P> Hypothesize the conclusion without its negation sign for &#172;I.  If a contradiction follows, the conclusion can be obtained by &#172;I.

</P>  <H4>Conjunction</H4>                

   <P> Prove each of the conjuncts separately and then conjoin them with &#8743;I.

</P>  <H4>Disjunction</H4>                

   <P> Sometimes (though not often) a disjunctive conclusion can be proved directly, simply by proving one of its disjuncts and applying &#8744;I.  Otherwise, hypothesize the negation of the conclusion and try &#172;I.

</P>  <H4>Implication</H4>                

   <P> Hypothesize its antecedent, derive its consequent, and then obtain the conditional by &#8594;I.

</P>  <H4>Bi-implication</H4>                

   <P> Use &#8594;I twice to prove the two implications needed to obtain the conclusion by &#8596;I.  

 </P>  
<HR><HR><A NAME="508"></A><H2>508.  Other Techniques </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: Other Techniques </I> ]</UL>
<H4>Description</H4>                

   <P> If a disjunctive premise is present, try proving the implications needed to get the conclusion by &#8744;E.  If not, add an extra hypothesis whose negation would be useful as an additional premise in the proof.  Then, discharge this extra hypothesis as quickly as possible by &#172;I to obtain its negation.  If even this fails, try the same thing with a different hypothesis.  Eventually (if the form you are trying to prove is in fact valid) you should hit on a hypothesis or series of hypotheses that will do the trick.  

 </P>  
<HR><HR><A NAME="509"></A><H2>509.  Contrapositive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: Contrapositive </I> ]</UL>
<H4>Description</H4>                

   <P> Given that:   (P &#8594; Q)  &#8596;  (&#172;Q &#8594; &#172;P)<BR>
   where the second implication is the contrapositive of the first.

   </P> <P> It is often easier to prove the contrapositive than the implication itself.  

 </P>  
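<P> That the contrapositive may be proved in place of the implication rests on their truth-functional equivalence, which a short Python check confirms (an illustrative sketch): </P>

```python
imp = lambda a, b: (not a) or b   # material conditional

# (P -> Q) and its contrapositive (not-Q -> not-P)
# agree on every truth assignment.
tautology = all(imp(p, q) == imp(not q, not p)
                for p in (True, False) for q in (True, False))
print(tautology)   # True
```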
<HR><HR><A NAME="510"></A><H2>510.  3 or more equivalents </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: 3 or more equivalents </I> ]</UL>
<H4>Description</H4>                

   <P> To prove something like this:

<PRE>
   P1 &#8596; P2
   P2 &#8596; P3
   P3 &#8596; P1
</PRE>

   would require 6 proofs.  But it's equivalent to:

<PRE>
   P1 &#8594; P2
   P2 &#8594; P3
   P3 &#8594; P1
</PRE>

   which requires only three proofs.  

 </P>  
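<P> That the cycle of three implications suffices can itself be checked semantically: every assignment satisfying the three conditionals also satisfies the three biconditionals.  A Python sketch (illustrative only): </P>

```python
from itertools import product

imp = lambda a, b: (not a) or b   # material conditional

# Does the cycle P1 -> P2, P2 -> P3, P3 -> P1 entail
# all three pairwise biconditionals?
entails = True
for p1, p2, p3 in product([True, False], repeat=3):
    cycle = imp(p1, p2) and imp(p2, p3) and imp(p3, p1)
    if cycle and not ((p1 == p2) and (p2 == p3) and (p3 == p1)):
        entails = False
print(entails)   # True
```

<P> The cycle is satisfied only when P1, P2, and P3 all take the same value, which is exactly what the biconditionals assert. </P>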
<HR><HR><A NAME="511"></A><H2>511.  Substrategies </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: Substrategies </I> ]</UL>
<H4>Description</H4>                

   <P> Various substrategies may develop if the conclusion is more complex.  E.g., if the conclusion is &#172;P &#8743; &#172;Q, a conjunction of two negated statements, the typical strategy is to prove each of its conjuncts separately and then join them by &#8743;I.  Since each conjunct is negated, the substrategy for proving each is to hypothesize it without its negation sign and use &#172;I.  Thus the proof consists of two hypothetical derivations for &#172;I followed by a step of &#8743;I.  

 </P>  
<HR><HR><A NAME="512"></A><H2>512.  Reductio Ad Absurdum </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: Reductio Ad Absurdum </I> ]</UL>
<H4>Description</H4>                

   <P> It is possible to do all Fitch proofs using the RAA method.  Simply assume the premises, then hypothesize the negation of the conclusion.  In most cases this technique yields the easiest strategy for filling in the proof to obtain the conclusion.  

 </P>  
<HR><HR><A NAME="513"></A><H2>513.  Theorems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: Theorems </I> ]</UL>
<H4>Proving</H4>                

   <P> The most common way to construct a proof for a theorem within the calculus is to begin the proof with a hypothesis.  Alternatively, the proof can begin with a Theorem Introduction step (which is really just the same thing, since the theorem being introduced was itself ultimately proven from a hypothetical first step).  

 </P>  
<HR><HR><A NAME="514"></A><H2>514.  Equivalence Theorems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Building Proofs :: Proof Strategies :: Theorems :: Equivalence Theorems </I> ]</UL>
<H4>Proving</H4>                

   <P> Proof of any equivalence theorem may be attempted through RAA.

   </P> <P> Alternatively, it may be proved as a two-step process.  To prove '&#8870;  P  &#8596;  Q', you need two main subproofs.  The first should prove (and conclude) '&#8870;  P  &#8594;  Q'.  The second should prove (and conclude) '&#8870;  Q  &#8594;  P'.  The last step of the proof, then, is a use of &#8596;I which combines the conclusions of the two subproofs to ultimately conclude '&#8870;  P  &#8596;  Q'.  

 </P>  
<HR><HR><A NAME="515"></A><H2>515.  Derived Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Derived Rules </I> ]</UL>
<H4>Description</H4>                

   <P> Any argument form (sequent) that is not a basic inference rule may be used as a new rule, provided its validity has first been demonstrated with a proof.

</P>  <H4>Notes</H4><UL>            

   <LI> All of the inference rules of the traditional calculus can be proved from the rules of Fitch.  Once proven, they can be added to the set of inference rules usable within a Fitch proof.

   </LI> <LI> Deriving these rules is good practice for building proofs, but also can prove useful in future proof building.  These forms were selected out by scholars of classical logic because they are forms which appear often.  Thus, even in a modern deductive calculus such as Fitch or Gensler, they can be a very useful shortcut in an otherwise long and tedious proof.

   </LI> <LI> Two particularly useful rules are Contradiction and Reiteration.

<PRE>
         P, &#172;P  &#8870;  Q

   1.   P		A
   2.   &#172;P		A
   3.   |   &#172;Q	H
   4.   |   P &#8743; &#172;P	1,2 &#8743;I
   5.   &#172;&#172;Q		3-4 &#172;I
   6.   Q		5 &#172;E



         A  &#8870;  A

   1.   A		A
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> More information on derived rules, including Theorems, is in 'Building Proofs'.  

 </LI> </UL> 
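<P> The Contradiction rule can look suspicious, but it is semantically valid in a vacuous way: no truth assignment makes both P and &#172;P true, so no counter-example row can exist.  A quick Python check (illustrative only): </P>

```python
# P, not-P entails Q: valid because the premises are never
# jointly true, so no assignment can falsify the sequent.
vacuous = all(not (p and (not p)) or q
              for p in (True, False) for q in (True, False))
print(vacuous)   # True
```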
<HR><HR><A NAME="516"></A><H2>516.  Theorems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Derived Rules :: Theorems </I> ]</UL>
<H4>Description</H4>                

   <P> Any substitution instance of a theorem may be introduced at any line of a proof.

</P>  <H4>Reason</H4>                

   <P> The <I>reason</I> for the step in the two-column proof is:  'TI &lt;theorem name&gt;'  (Theorem Introduction).  

 </P>  
<HR><HR><A NAME="517"></A><H2>517.  Equivalences </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Derived Rules :: Theorems :: Equivalences </I> ]</UL>
<H4>Use in Proofs</H4>                

   <P> Given an equivalence of the form '&#8870;  A &#8596; B' and a wff X that contains a substitution instance of A or B, you are entitled to infer a new wff X' which is the result of replacing that substitution instance of A or B in X with the corresponding substitution instance of B or A.  The <I>reason</I>, then, is the line cited and the name of the equivalence theorem.  

 </P>  
<HR><HR><A NAME="518"></A><H2>518.  Barwise </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise </I> ]</UL>
<H4>Description</H4>                

   <P> Barwise presents an alternate form of Fitch.  The inference rules that follow list only what is different about Barwise's version of Fitch.  Any inference rules not listed here are identical to those already presented.

</P>  <H4>Notes</H4><UL>            

   <LI> Barwise adds a new operator &#8869; (<I>contradiction</I>).  This also requires two new rules, &#8869;I and &#8869;E.  

 </LI> </UL> 
<HR><HR><A NAME="519"></A><H2>519.  Negation Introduction (&#172;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Negation Introduction (&#172;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> (P &#8870; &#8869;)  &#8870;  &#172;P

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> <PRE>
1   p &#8594; q		A
2   &#172;q		A
3   |   p		H
4   |   q		1,3 &#8594;E
5   |   q &#8743; &#172;q	2,4 &#8743;I
6   &#172;p		3-5 &#172;I
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="520"></A><H2>520.  Contradiction Introduction (&#8869;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Contradiction Introduction (&#8869;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> P, &#172;P  &#8870;  &#8869;  

 </LI> </UL> 
<HR><HR><A NAME="521"></A><H2>521.  Contradiction Elimination (&#8869;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Contradiction Elimination (&#8869;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8869;  &#8870;  P  

 </LI> </UL> 
<HR><HR><A NAME="522"></A><H2>522.  Disjunction Elimination (&#8744;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Disjunction Elimination (&#8744;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> A &#8744; B, (A  &#8870;  P), (B  &#8870;  P)  &#8870;  P  

 </LI> </UL> 
<HR><HR><A NAME="523"></A><H2>523.  Biconditional Elimination (&#8596;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Biconditional Elimination (&#8596;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> P &#8596; Q, P  &#8870;  Q  

 </LI> </UL> 
<HR><HR><A NAME="524"></A><H2>524.  Biconditional Introduction (&#8596;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Biconditional Introduction (&#8596;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> (P  &#8870;  Q), (Q  &#8870;  P)  &#8870;  P &#8596; Q  

 </LI> </UL> 
<HR><HR><A NAME="525"></A><H2>525.  Gensler </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Gensler </I> ]</UL>
<H4>Description</H4>                

   <P> The Gensler calculi also contain nested subproofs.  However, they do not tend to mimic closely how people reason naturally, and some steps may be counter-intuitive.  Even so, Gensler proofs are usually fairly straightforward in terms of finding the right sequence of steps to reach the conclusion.

   </P> <P> The Gensler calculus relies heavily upon Reductio ad Absurdum.  In most cases, the only way to complete a proof is to assume the premises and hypothesize the negation of the conclusion.  Use the S-rules and I-rules to try to reduce all propositions down to atomic formulas and their negations (similar to truth trees).  The goal is to obtain two contradictory propositions in the subproof, which then allows us to terminate the subproof and infer the negation of the hypothesis.

   </P> <P> Gensler Two-Column Proofs include a 'simplified' justification system.  A justification is 'A' (Assumption), 'H' (Hypothesis), or a list of citations without a reference to a rule.  (This can be troublesome, as it is not always clear which rule is being applied.)  

 </P>  
<HR><HR><A NAME="526"></A><H2>526.  Subproofs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Gensler :: Subproofs </I> ]</UL>
<H4>Description</H4>                

   <P> A subproof is a means of presenting hypothetical arguments within a proof.  A subproof's steps are indented.  Every hypothetical argument has a hypothesis (justification 'H' or 'Hypothesis').

   </P> <P> There are only certain cases where subproofs can be used; this is indicated by a rule in which a parenthesized sequent is a premise.  E.g., the rule for &#172;I is:  ( P  &#8870;  Q &#8743; &#172;Q )  &#8870;  &#172;P, which says, "Given a hypothetical proof of a contradiction from some hypothesis P, we can infer &#172;P."

   </P> <P> Once a subproof is ended, all its steps are 'closed'.  No closed steps may be further cited for new conclusions.  The subproof as a whole, however, can be cited.  E.g., for &#172;I, on the step where we place the conclusion &#172;P (outside the subproof), we cite the entire subproof using n-m notation, where n and m are the first and last steps of the subproof.

   </P> <P> Subproofs can be nested without limit, and any step in a subproof may cite any 'open' step before it (whether inside or outside the subproof itself).

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
P &#8594; Q, &#172;Q  &#8870;  &#172;P

1.  P &#8594; Q	A
2.  &#172;Q		A
3.  |   P		H
4.  |   Q		1,3 &#8594;E
5.  |   Q &#8743; &#172;Q	2,4 &#8743;I
6.  &#172;P		3-5 &#172;I
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="527"></A><H2>527.  Sequent Notation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: Sequent Notation </I> ]</UL>
<H4>Description</H4>                

   <P> A sequent, written '&#915;  &#8870;  P', asserts that the conclusion P is derivable from the list of premises &#915;.  The inference rules of this calculus are stated as sequents; e.g., Modus Ponens is the sequent P &#8594; Q, P  &#8870;  Q.

   </P> <P> A parenthesized sequent appearing as a premise of a rule calls for a subproof:  the rule applies once the parenthesized sequent's conclusion has been derived from its premise within a hypothetical argument.  E.g., the rule for &#172;I is  ( P  &#8870;  Q &#8743; &#172;Q )  &#8870;  &#172;P.

   </P> <P> The sequent being proved is conventionally written above the proof, as in the example below.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
      P &#8594; Q, &#172;Q  &#8870;  &#172;P

   1.  P &#8594; Q		A
   2.  &#172;Q		A
   3.  |   P		H
   4.  |   Q		1,3 &#8594;E
   5.  |   Q &#8743; &#172;Q	2,4 &#8743;I
   6.  &#172;P		3-5 &#172;I
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="528"></A><H2>528.  Simplification Rules (S-Rules) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: Simplification Rules (S-Rules) </I> ]</UL>
<H4>Description</H4>                

   <P> S-rules simplify statements:  each rule breaks one statement down into simpler component statements.

</P>  <H4>S-Rules</H4><UL>            

   <LI> P &#8743; Q  &#8870;  P, Q

   </LI> <LI> &#172;(P &#8744; Q)  &#8870;  &#172;P, &#172;Q

   </LI> <LI> &#172;(P &#8594; Q)  &#8870;  P, &#172;Q

   </LI> <LI> &#172;&#172;P  &#8870;  P

   </LI> <LI> (P &#8596; Q)  &#8870;  (P &#8594; Q), (Q &#8594; P)

   </LI> <LI> &#172;(P &#8596; Q)  &#8870;  (P &#8744; Q), &#172;(P &#8743; Q)  

 </LI> </UL> 
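<P> Since the S-rules operate purely on the syntactic shape of a statement, they can be sketched as a pattern-matching rewrite.  The Python below is illustrative only (the tuple encoding of formulas and the name s_rule are assumptions of this sketch, not part of the calculus): </P>

```python
# Each S-rule maps one statement to the simpler statement(s) it licenses.
# Formulas are nested tuples -- ('atom', name), ('not', f), and binary
# ('and' | 'or' | 'imp' | 'iff', f, g) -- a hypothetical encoding chosen
# only for this sketch.

def s_rule(f):
    """Apply the matching S-rule to f, returning the list of conclusions,
    or [] if no S-rule applies."""
    op = f[0]
    if op == 'and':                                 # P ∧ Q  ⊢  P, Q
        return [f[1], f[2]]
    if op == 'iff':                                 # P ↔ Q  ⊢  P→Q, Q→P
        return [('imp', f[1], f[2]), ('imp', f[2], f[1])]
    if op == 'not':
        g = f[1]
        if g[0] == 'not':                           # ¬¬P  ⊢  P
            return [g[1]]
        if g[0] == 'or':                            # ¬(P ∨ Q)  ⊢  ¬P, ¬Q
            return [('not', g[1]), ('not', g[2])]
        if g[0] == 'imp':                           # ¬(P → Q)  ⊢  P, ¬Q
            return [g[1], ('not', g[2])]
        if g[0] == 'iff':                           # ¬(P ↔ Q)  ⊢  P∨Q, ¬(P∧Q)
            return [('or', g[1], g[2]),
                    ('not', ('and', g[1], g[2]))]
    return []

P, Q = ('atom', 'P'), ('atom', 'Q')
print(s_rule(('not', ('imp', P, Q))))   # [('atom', 'P'), ('not', ('atom', 'Q'))]
```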
<HR><HR><A NAME="529"></A><H2>529.  Inference Rules (I-Rules) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: Inference Rules (I-Rules) </I> ]</UL>
<H4>Description</H4>                

   <P> I-rules infer a new statement from two premises.

</P>  <H4>I-Rules</H4><UL>            

   <LI> &#172;(P &#8743; Q), P  &#8870;  &#172;Q

   </LI> <LI> &#172;(P &#8743; Q), Q  &#8870;  &#172;P

   </LI> <LI> P &#8744; Q, &#172;P  &#8870;  Q

   </LI> <LI> P &#8744; Q, &#172;Q  &#8870;  P

   </LI> <LI> P &#8594; Q, P  &#8870;  Q

   </LI> <LI> P &#8594; Q, &#172;Q  &#8870;  &#172;P  

 </LI> </UL> 
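<P> Each I-rule can be checked for soundness by truth tables:  a rule is acceptable if every assignment making both premises true also makes the conclusion true.  The sketch below (illustrative Python written for this outline; the rule table and helper names are its own constructions) verifies all six rules: </P>

```python
from itertools import product

# Each I-rule as a pair (premise evaluator, conclusion evaluator) over the
# truth values p, q.  This is a sanity check, not part of the calculus.

IMP = lambda a, b: (not a) or b

i_rules = [
    (lambda p, q: [not (p and q), p],  lambda p, q: not q),
    (lambda p, q: [not (p and q), q],  lambda p, q: not p),
    (lambda p, q: [p or q, not p],     lambda p, q: q),
    (lambda p, q: [p or q, not q],     lambda p, q: p),
    (lambda p, q: [IMP(p, q), p],      lambda p, q: q),
    (lambda p, q: [IMP(p, q), not q],  lambda p, q: not p),
]

def truth_preserving(premises, conclusion):
    """True iff every row making all premises true makes the conclusion true."""
    return all(conclusion(p, q)
               for p, q in product([False, True], repeat=2)
               if all(premises(p, q)))

print(all(truth_preserving(ps, c) for ps, c in i_rules))   # True
```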
<HR><HR><A NAME="530"></A><H2>530.  Traditional (Infer/Equiv) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) </I> ]</UL>
<H4>Description</H4>                

   <P> This section presents the traditional proof calculus developed and used by Frege and Russell.  Although this calculus is still used, it has many failings.  Proof strategies are rarely obvious, forcing the logician to resort to trial-and-error.  Proofs tend to be longer, and intermediate derived propositions tend to be far more complex.  The reasoning process expressed by the traditional calculus does not feel 'natural'; that is, it does not closely model the reasoning process that people use in everyday life.  

 </P>  
<HR><HR><A NAME="531"></A><H2>531.  Absorption (Abs) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Absorption (Abs) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P &#8594; Q  &#8870;  P &#8594; (P &#8743; Q)  

 </LI> </UL> 
<HR><HR><A NAME="532"></A><H2>532.  Addition (Add) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Addition (Add) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P  &#8870;  P &#8744; Q  

 </LI> </UL> 
<HR><HR><A NAME="533"></A><H2>533.  Conjunction (Conj) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Conjunction (Conj) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P, Q  &#8870;  P &#8743; Q  

 </LI> </UL> 
<HR><HR><A NAME="534"></A><H2>534.  Constructive Dilemma (CD) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Constructive Dilemma (CD) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (P &#8594; Q) &#8743; (R &#8594; S), P &#8744; R  &#8870;  Q &#8744; S  

 </LI> </UL> 
<HR><HR><A NAME="535"></A><H2>535.  Contradiction (Con) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Contradiction (Con) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P, &#172;P  &#8870;  Q

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> This states that given two contradictory propositions, you may infer anything at all.  

 </LI> </UL> 
<HR><HR><A NAME="536"></A><H2>536.  Destructive Dilemma (DD) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Destructive Dilemma (DD) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> (P &#8594; Q) &#8743; (R &#8594; S), &#172;Q &#8744; &#172;S  &#8870;  &#172;P &#8744; &#172;R  

 </LI> </UL> 
<HR><HR><A NAME="537"></A><H2>537.  Disjunctive Syllogism (DS) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Disjunctive Syllogism (DS) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P &#8744; Q, &#172;P  &#8870;  Q

  

 </LI> </UL> 
<HR><HR><A NAME="538"></A><H2>538.  Hypothetical Syllogism (HS) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Hypothetical Syllogism (HS) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P &#8594; Q, Q &#8594; R  &#8870;  P &#8594; R  

 </LI> </UL> 
<HR><HR><A NAME="539"></A><H2>539.  Modus Ponens (MP) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Modus Ponens (MP) </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Modus Ponendo Ponens, <I>the mode that affirms by affirming</I>

   </LI> <LI> Mode of affirming

</LI> </UL> <H4>Forms</H4><UL>            

   <LI> P &#8594; Q, P  &#8870;  Q  

 </LI> </UL> 
<HR><HR><A NAME="540"></A><H2>540.  Modus Tollens (MT) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Modus Tollens (MT) </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Mode of denying

</LI> </UL> <H4>Forms</H4><UL>            

   <LI> P &#8594; Q, &#172;Q  &#8870;  &#172;P  

 </LI> </UL> 
<HR><HR><A NAME="541"></A><H2>541.  Simplification (Simp) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Inference Rules :: Simplification (Simp) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> P &#8743; Q  &#8870;  P, Q  

 </LI> </UL> 
<HR><HR><A NAME="542"></A><H2>542.  Equivalence Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules </I> ]</UL>
<H4>Descriptions</H4>                

   <P> Any tautology of the form '&#8870; A &#8596; B'.

</P>  <H4>Use in Proofs</H4>                

   <P> Given an equivalence of the form 'A &#8596; B' and a wff X that contains a substitution instance of A or B, you are entitled to infer a new wff X' which results from replacing that substitution instance of A (or B) in X with the corresponding substitution instance of B (or A).  The justification, then, is the line cited and the name of the equivalence.  

 </P>  
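<P> The replacement operation just described is a recursive substitution of one subformula for an equivalent one.  Below is a hedged Python sketch (the tuple encoding of formulas and the name replace are assumptions of this illustration), using Material Implication as the equivalence: </P>

```python
# Replacement for equivalence rules: given wff X containing subformula A,
# produce X' with A replaced by the equivalent B.  Formulas are nested
# tuples (a hypothetical encoding used only for this sketch).

def replace(x, a, b):
    """Replace every occurrence of subformula a in x by b."""
    if x == a:
        return b
    if x[0] == 'atom':
        return x
    return (x[0],) + tuple(replace(sub, a, b) for sub in x[1:])

P, Q, R = ('atom', 'P'), ('atom', 'Q'), ('atom', 'R')

# Material Implication: (P → Q) ↔ (¬P ∨ Q), applied inside a larger wff:
x  = ('and', ('imp', P, Q), R)              # (P → Q) ∧ R
x2 = replace(x, ('imp', P, Q), ('or', ('not', P), Q))
# x2 == ('and', ('or', ('not', P), Q), R), i.e. (¬P ∨ Q) ∧ R
```

<P> The justification for such a step in a proof would cite the line holding X and the equivalence's name (here, MI). </P>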
<HR><HR><A NAME="543"></A><H2>543.  Association (ASSOC) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Association (ASSOC) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; (P &#8743; (Q &#8743; R)) &#8596; ((P &#8743; Q) &#8743; R)

   </LI> <LI> &#8870; (P &#8744; (Q &#8744; R)) &#8596; ((P &#8744; Q) &#8744; R)  

 </LI> </UL> 
<HR><HR><A NAME="544"></A><H2>544.  Commutation (COMM) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Commutation (COMM) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; (P &#8743; Q) &#8596; (Q &#8743; P)

   </LI> <LI> &#8870; (P &#8744; Q) &#8596; (Q &#8744; P)  

 </LI> </UL> 
<HR><HR><A NAME="545"></A><H2>545.  De Morgan's (DM) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: De Morgan's (DM) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; &#172;(P &#8743; Q) &#8596; (&#172;P &#8744; &#172;Q)

   </LI> <LI> &#8870; &#172;(P &#8744; Q) &#8596; (&#172;P &#8743; &#172;Q)  

 </LI> </UL> 
<HR><HR><A NAME="546"></A><H2>546.  Distribution (DISTR) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Distribution (DISTR) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; (P &#8743; (Q &#8744; R)) &#8596; ((P &#8743; Q) &#8744; (P &#8743; R))

   </LI> <LI> &#8870; (P &#8744; (Q &#8743; R)) &#8596; ((P &#8744; Q) &#8743; (P &#8744; R))  

 </LI> </UL> 
<HR><HR><A NAME="547"></A><H2>547.  Double Negation (DN) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Double Negation (DN) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; P &#8596; &#172;&#172;P  

 </LI> </UL> 
<HR><HR><A NAME="548"></A><H2>548.  Exportation (EXP) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Exportation (EXP) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; ((P &#8743; Q) &#8594; R) &#8596; (P &#8594; (Q &#8594; R))  

 </LI> </UL> 
<HR><HR><A NAME="549"></A><H2>549.  Material Equivalence (ME) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Material Equivalence (ME) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; (P &#8596; Q) &#8596; ((P &#8594; Q) &#8743; (Q &#8594; P))

   </LI> <LI> &#8870; (P &#8596; Q) &#8596; ((P &#8743; Q) &#8744; (&#172;P &#8743; &#172;Q))  

 </LI> </UL> 
<HR><HR><A NAME="550"></A><H2>550.  Material Implication (MI) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Material Implication (MI) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; (P &#8594; Q) &#8596; (&#172;P &#8744; Q)  

 </LI> </UL> 
<HR><HR><A NAME="551"></A><H2>551.  Tautology (TAUT) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Tautology (TAUT) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; P &#8596; (P &#8743; P)

   </LI> <LI> &#8870; P &#8596; (P &#8744; P)  

 </LI> </UL> 
<HR><HR><A NAME="552"></A><H2>552.  Transposition (TRANS) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Traditional (Infer/Equiv) :: Equivalence Rules :: Transposition (TRANS) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; (P &#8594; Q) &#8596; (&#172;Q &#8594; &#172;P)  

 </LI> </UL> 
<HR><HR><A NAME="553"></A><H2>553.  Axiomatic Systems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Axiomatic Systems </I> ]</UL>
<H4>Description</H4>                

   <P> An axiomatic system of deduction attempts to reduce the number of inference rules to a minimum, relying instead on a fixed set of tautologies called axiom schemas.  An example is given in the following article.

</P>  <H4>Notes</H4><UL>            

   <LI> Today, axiomatic systems are largely studied out of curiosity.  Logicians favour natural deduction systems (inference rules) for many reasons.  1) Natural deduction systems model more closely how people think in everyday reasoning (hence the name, "Natural Deduction").  2) Solution paths, that is, the steps needed to get from a set of premises to a conclusion, are more clearly visible in natural deduction -- especially for systems such as Fitch and Gensler.  3) Intermediate forms within the proof tend to be shorter.  This is advantageous because it is easier to understand shorter forms and easier to reason from them.

   </LI> <LI> Though axiomatic systems are cumbersome to work with, they are easy to reason <I>about</I>.  That is, it is easy to prove that such a system is correct.  

 </LI> </UL> 
<HR><HR><A NAME="554"></A><H2>554.  Example of </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Propositional Logic (PL) :: Inference Theory :: Calculi :: Axiomatic Systems :: Example of </I> ]</UL>
<H4>Language</H4>                

   <P> This system uses only two operators: &#172;, &#8594;

   </P> <P> The remaining operators are then introduced as <I>shorthand</I> for various common forms.

<UL>
      &#934; &#8744; &#936;  &#8797;  &#172;&#934; &#8594; &#936;<BR>
      &#934; &#8743; &#936;  &#8797;  &#172;(&#934; &#8594; &#172;&#936;)<BR>
      &#934; &#8596; &#936;  &#8797;  (&#934; &#8594; &#936;) &#8743; (&#936; &#8594; &#934;)
</UL>

</P>  <H4>Inference Rules</H4><UL>            

   <LI> &#934;  &#8594;  &#936;,  &#934;  &#8870;  &#936;

</LI> </UL> <H4>Axioms</H4><UL>            

   <LI> &#934; &#8594; (&#936; &#8594; &#934;)
   </LI> <LI> (&#934; &#8594; (&#936; &#8594; &#920;)) &#8594; ((&#934; &#8594; &#936;) &#8594; (&#934; &#8594; &#920;))
   </LI> <LI> (&#172;&#936; &#8594; &#172;&#934;) &#8594; ((&#172;&#936; &#8594; &#934;) &#8594; &#936;)  

 </LI> </UL> 
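<P> Each axiom schema must be a tautology in every instance.  The following Python sketch (written for this outline as a sanity check; the helper names are its own constructions) confirms this for the three schemas above, using only negation and the conditional: </P>

```python
from itertools import product

# The three axiom schemas above, as truth functions over an instance
# (p, q, r).  IMP is the material conditional.
IMP = lambda a, b: (not a) or b

ax1 = lambda p, q, r: IMP(p, IMP(q, p))
ax2 = lambda p, q, r: IMP(IMP(p, IMP(q, r)), IMP(IMP(p, q), IMP(p, r)))
ax3 = lambda p, q, r: IMP(IMP(not q, not p), IMP(IMP(not q, p), q))

def tautology(f):
    """True iff f holds under every assignment to its three variables."""
    return all(f(p, q, r) for p, q, r in product([False, True], repeat=3))

print([tautology(a) for a in (ax1, ax2, ax3)])   # [True, True, True]
```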
<HR><HR><A NAME="555"></A><H2>555.  Predicate Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic </I> ]</UL>


<H4>Description</H4>                

   <P> Predicate logic expands upon the concept of a Proposition Symbol by introducing the predicate.  Predicates allow us to represent more detail of the atomic proposition.  While Predicate Logic has no capabilities (in terms of kinds of reasoning) above those of Propositional Logic, it does add several concepts necessary for more advanced logics.  

 </P>  
<HR><HR><A NAME="556"></A><H2>556.  All Lexical Elements of Propositional Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Lexical Elements :: All Lexical Elements of Propositional Logic </I> ]</UL>
<H4>Description</H4>                

   <P> First-Order Logic is an extension of Propositional Logic.  Therefore, all concepts of PL are also concepts of FOL, including the notational elements.  

 </P>  
<HR><HR><A NAME="557"></A><H2>557.  Object Symbol </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Lexical Elements :: Object Symbol </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Object

</LI> </UL> <H4>Description</H4>                

   <P> An object symbol is a label for some thing or concept which we call an <I>object</I>.

</P>  <H4>Notation</H4><UL>            

   <LI> &lt;lower case letter&gt;<BR>
   where, &lt;lower case letter&gt; &#8712; { a, ..., z }.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> &lt;name&gt;<BR>
   Where, &lt;name&gt; is a sequence of characters.  Usually the first character is a lower case letter.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> j	John

   </LI> <LI> p	pen  

 </LI> </UL> 
<HR><HR><A NAME="558"></A><H2>558.  Predicate Symbol </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Lexical Elements :: Predicate Symbol </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Predicate Letter

</LI> </UL> <H4>Notation</H4><UL>            

   <LI> &lt;capital letter&gt; &lt;arguments&gt;<BR>
   Where, &lt;capital letter&gt; is a unique symbol.<BR>
   Where, &lt;arguments&gt; is one or more object and variable symbols.

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> &lt;name&gt;( &lt;arguments&gt; )<BR>
   Where, &lt;name&gt; is a sequence of characters.  Usually the first character is an upper case letter.<BR>
   Where, &lt;arguments&gt; is a comma separated list of object symbols.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> Tx	 	x is tall.

   </LI> <LI> Lxy 	x likes y.  

 </LI> </UL> 
<HR><HR><A NAME="559"></A><H2>559.  Relation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Lexical Elements :: Predicate Symbol :: Relation </I> ]</UL>
<H4>Description</H4>                

   <P> A relation is simply a binary predicate:  every binary predicate is a relation, and every relation is a binary predicate.  

 </P>  
<HR><HR><A NAME="560"></A><H2>560.  Identity Predicate </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Lexical Elements :: Predicate Symbol :: Relation :: Identity Predicate </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> &lt;obj or var&gt; = &lt;obj or var&gt;<BR>
   Where, &lt;obj or var&gt; is an object or variable symbol.

</LI> </UL> <H4>Description</H4>                

   <P> The identity predicate is special:  it is the only predicate reserved as part of the language.

</P>  <H4>Semantics</H4>                

   <P> The identity is the only predefined predicate symbol.  It asserts that the two symbols refer to the same object.

</P>  <H4>Examples</H4><UL>            

   <LI> j = john,	'j' and 'john' are both labels for John.  

 </LI> </UL> 
<HR><HR><A NAME="561"></A><H2>561.  The Dictionary </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: The Dictionary </I> ]</UL>
<H4>Description</H4>                

   <P> The dictionary is a listing of definitions for constant object symbols and predicate symbols.  The dictionary is an essential part of any system which uses predicates:  unlike in Propositional Logic, the meaning of a predicate is necessary to determine the truth of each form it may take.

   </P> <P> For example, the following propositions have different truth values under each of the dictionaries presented.
<PRE>
   Tj,	T under the first, F under the second.
   Mm,	F under the first, T under the second
   Mj,	T under the first, T under the second
</PRE>

<PRE>
   Dictionary 1

   Tx,	x is tall.
   Mx,	x is male.
   Fx,	x is female.

   j,	John
   m,	Mary
</PRE>

<PRE>
   Dictionary 2

   Tx,	x is a truck.
   Mx,	x is a Mitsubishi.
   Fx,	x is a Fiat.

   j,	John's vehicle
   m,	Mary's vehicle
</PRE>  

 </P>  
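<P> The dictionaries above can be rendered as interpretations in code:  each predicate letter is mapped to an extension and each object symbol to an object.  In the Python sketch below, the extensions (who is tall, which vehicle is a Mitsubishi, and so on) are illustrative assumptions chosen to match the truth values quoted above: </P>

```python
# The two dictionaries, rendered as interpretations: each predicate letter
# is given an extension (the set of objects it is true of).  The extensions
# below are illustrative assumptions, matching the truth values quoted in
# the text (John is tall and male; both vehicles are Mitsubishis, neither
# is a truck).

dict1 = {'T': {'john'},                    # Tx: x is tall
         'M': {'john'},                    # Mx: x is male
         'j': 'john', 'm': 'mary'}

dict2 = {'T': set(),                       # Tx: x is a truck
         'M': {'johns_car', 'marys_car'},  # Mx: x is a Mitsubishi
         'j': 'johns_car', 'm': 'marys_car'}

def holds(pred, obj, d):
    """Truth of an atomic formula like 'Tj' under dictionary d."""
    return d[obj] in d[pred]

for d in (dict1, dict2):
    print(holds('T', 'j', d), holds('M', 'm', d), holds('M', 'j', d))
# Tj: T then F;  Mm: F then T;  Mj: T then T -- as in the text.
```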
<HR><HR><A NAME="562"></A><H2>562.  Formation Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Well-Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>
<H4>Rules</H4><OL>            

   <LI> Any Proposition Symbol, or n-place Predicate followed by n object symbols, is a WFF.

   </LI> <LI> If &#934; is a wff, so is &#172;&#934;.

   </LI> <LI> If &#934; and &#936; are wffs, so are (&#934; &#8743; &#936;), (&#934; &#8744; &#936;), (&#934; &#8594; &#936;) and (&#934; &#8596; &#936;).

</LI> </OL>  
<HR><HR><A NAME="563"></A><H2>563.  Conversational Implicature </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Formalization Hints :: Conversational Implicature </I> ]</UL>
<H4>Description</H4>                

   <P> It is often difficult to determine how to translate an English sentence into FOL because of implied meanings.  If a phrase has an implied meaning which can be canceled by a subsequent sentence, then the implied meaning should NOT be translated.

</P>  <H4>Examples</H4><UL>            

   <LI> Max is home unless Claire is at the library.<BR>
   Becomes:  &#172;Library(claire) &#8594; Home(max)  

 </LI> </UL> 
<HR><HR><A NAME="564"></A><H2>564.  Noun Phrases </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Formalization Hints :: Noun Phrases </I> ]</UL>
<H4>Description</H4>                

   <P> Proper names translate as individual constant object symbols.

   </P> <P> Common nouns generally translate as monadic (unary) predicates, but some English common nouns may be analyzed as relations:  for example, 'father' may be translated as the monadic predicate 'is a father' (i.e., 'is a father of someone') or, when the context warrants, as the two-place predicate 'is the father of'.  

 </P>  
<HR><HR><A NAME="565"></A><H2>565.  Adjectives </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Formalization Hints :: Adjectives </I> ]</UL>
<H4>Description</H4>                

   <P> Adjectives usually modify nouns:  'brick' school, 'blue' bell, etc., and are translated, with a few exceptions, as monadic predicates.  Exceptions include 'large mouse' (is a mouse but isn't large); 'suspected thief' (is suspected but may not be a thief); 'short giraffe' (is a giraffe but may not be short); similarly for adjectives such as 'heavy'/'light'/'tall'/'small', 'alleged' criminal, 'fake' diamonds, 'former' senator, etc.  The exceptional cases involve an interaction between the meaning of the adjective and the noun it modifies:  large for a mouse, small for an elephant, suspected of being a criminal, etc.  The exceptions can make a difference in evaluating arguments.  The argument
<PRE>
      Every philosopher is a lover.  So, every good philosopher is a good lover.
</PRE>

   </P> <P> is not a valid argument, in contrast with
<PRE>
      All buildings are structures.  So, all brick buildings are brick structures.
</PRE>
   </P> <P> Translate the exceptions as single monadic predicates:  'large mouse':  Mx.

   </P> <P> Adjectives modified by adverbs such as 'very wealthy lawyer' are not translated as a conjunction like 'x is very &#8743; x is wealthy &#8743; x is a lawyer' but rather by 'x is very wealthy &#8743; x is a lawyer'.  

 </P>  
<HR><HR><A NAME="566"></A><H2>566.  Intransitive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Formalization Hints :: Verb Phrases :: Intransitive </I> ]</UL>
<H4>Description</H4>                

   <P> These are verb phrases which do not take grammatical objects.  They are translated into monadic predicates.

<PRE>
	John laughed.
	Lj

	Sarah died.
	Ds
</PRE>  

 </P>  
<HR><HR><A NAME="567"></A><H2>567.  Transitive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Formalization Hints :: Verb Phrases :: Transitive </I> ]</UL>
<H4>Description</H4>                

   <P> These are verb phrases which take grammatical (direct) objects and may take indirect objects.  These translate into polyadic predicates.

<PRE>
	x loves y
	Lxy

	x gives y to z
	Gxyz
</PRE>  

 </P>  
<HR><HR><A NAME="568"></A><H2>568.  If objects are clausal complements </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Syntax :: Formalization Hints :: Verb Phrases :: If objects are clausal complements </I> ]</UL>
<H4>Description</H4>                

   <P> A verb whose objects are clausal complements takes propositions, or grammatical constructions closely related to propositions, as its objects:  'believe', 'know', 'persuade', 'hope', 'want', etc.  These create non-extensional contexts and are translated as atomic propositions.  

 </P>  
<HR><HR><A NAME="569"></A><H2>569.  Interpretation Structures </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures </I> ]</UL>
<H4>Description</H4>                

   <P> In PL it is possible to arbitrarily assign values of T or F to Proposition Symbols.

   </P> <P> In Predicate Logic we work with predicates and objects.  These elements have semantic values outside of our logical language.  Therefore, when they are used to build propositions, they cannot arbitrarily be assigned truth values; they already have truth values.  This introduces the concept of truth under an interpretation.  The interpretation structure is a property of predicates.  

 </P>  
<HR><HR><A NAME="570"></A><H2>570.  Complexities </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Complexities </I> ]</UL>
<H4>Description</H4>                

   <P> Predicate Logic introduces complexities concerning the truth values of propositions.  These complexities are not present in PL.  

 </P>  
<HR><HR><A NAME="571"></A><H2>571.  Atomic Formulas </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Complexities :: Atomic Formulas </I> ]</UL>
<H4>Description</H4>                

   <P> Some atomic formulas (predicates) are treated as compound expressions.  These atomic formulae cannot arbitrarily be assigned truth values.  (E.g. Fab, 'a is the father of b'.)  Each truth value depends upon the interpretation of the symbols.  

 </P>  
<HR><HR><A NAME="572"></A><H2>572.  Universe of Discourse </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Universe of Discourse </I> ]</UL>
<H4>Alternate Names</H4>                

   <P> Universe

   </P> <P> Domain

</P>  <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The class of objects relative to which the name and predicate letters are interpreted.  

 </P>  
<HR><HR><A NAME="573"></A><H2>573.  Model </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Model </I> ]</UL>
<H4>Alternate Names</H4>                

   <P> Interpretation Structure

</P>  <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A model is a method to deal with the complexity introduced by atomic formulas.  The nature of the interpretation depends upon the type of symbol.

<PRE>
	- Object Symbol
		interpretation:  some object
		example:  the moon

	- Proposition Symbol
		interpretation:  truth value

	- Unary Predicate Symbol
		interpretation:  Class of objects
		example:  tall, young, mammal

	- N-ary Predicate Symbol
		interpretation:  relation between n objects
		example:  a is-father-of b
</PRE>

   </P> <P> Given a model, every atomic formula p built up from those symbols is assigned a truth value according to the following:  

 </P>  
<HR><HR><A NAME="574"></A><H2>574.  Proposition Symbols </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Model :: Proposition Symbols </I> ]</UL>
<H4>Description</H4>                

   <P> If p consists of a single proposition symbol, then its truth value is the one specified directly by the model.  

 </P>  
<HR><HR><A NAME="575"></A><H2>575.  Unary Predicates </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Model :: Unary Predicates </I> ]</UL>
<H4>Description</H4>                

   <P> If p consists of a predicate symbol followed by a single name symbol, then p is assigned the value T if the object designated by the name symbol is a member of the class designated by the predicate symbol, otherwise p is assigned the value F.  

 </P>  
<HR><HR><A NAME="576"></A><H2>576.  N-Place Predicates </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Language :: Semantics :: Interpretation Structures :: Model :: N-Place Predicates </I> ]</UL>
<H4>Description</H4>                

   <P> If p consists of a predicate symbol followed by two or more name symbols, then p is assigned the value T if the objects designated by the name symbols stand in the relation designated by the predicate symbol, otherwise p is assigned F.  

 </P>  
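<P> The three clauses above can be collected into a single evaluation procedure:  a proposition symbol is looked up directly, a unary predicate is tested by class membership, and an n-place predicate by membership of the tuple of designated objects in the relation.  The Python below is an illustrative sketch; the sample model and all names are its own assumptions: </P>

```python
# A model assigns: proposition symbols -> truth values, object symbols ->
# objects, unary predicates -> classes (sets), n-ary predicates ->
# relations (sets of tuples).  The sample model is illustrative.

model = {
    'R': True,                           # proposition symbol: direct value
    'a': 'alice', 'b': 'bob',            # object symbols
    'T': {'alice'},                      # unary: the class of tall things
    'F': {('bob', 'alice')},             # binary: is-father-of pairs
}

def atomic_value(symbol, args, m):
    """Truth value of an atomic formula, per the three clauses above."""
    if not args:                         # bare proposition symbol
        return m[symbol]
    objs = tuple(m[name] for name in args)
    if len(objs) == 1:                   # unary predicate: class membership
        return objs[0] in m[symbol]
    return objs in m[symbol]             # n-ary: relation membership

print(atomic_value('R', [], model))          # True
print(atomic_value('T', ['a'], model))       # True  (Ta)
print(atomic_value('F', ['b', 'a'], model))  # True  (Fba)
print(atomic_value('F', ['a', 'b'], model))  # False (Fab)
```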
<HR><HR><A NAME="577"></A><H2>577.  Inference Theories </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories </I> ]</UL>
<H4>Description</H4>                

   <P> For the most part, the tools of Predicate Logic are the same as those for Propositional Logic.  

 </P>  
<HR><HR><A NAME="578"></A><H2>578.  Relations </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations </I> ]</UL>
<H4>Descriptions</H4>                

   <P> Many-place predicates are sometimes called "Relation Symbols" and are taken to stand for relations.  Binary relations are often described in terms of certain basic properties.  

 </P>  
<HR><HR><A NAME="579"></A><H2>579.  Reflexive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Reflexive </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is reflexive   iff   &#8704;xRxx  

 </P>  
<HR><HR><A NAME="580"></A><H2>580.  Symmetric </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Symmetric </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is symmetric   iff   &#8704;x&#8704;y(Rxy &#8594; Ryx)  

 </P>  
<HR><HR><A NAME="581"></A><H2>581.  Transitive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Transitive </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is transitive  iff   &#8704;x&#8704;y&#8704;z((Rxy &#8743; Ryz) &#8594; Rxz)  

 </P>  
<HR><HR><A NAME="582"></A><H2>582.  Irreflexive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Irreflexive </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is irreflexive   iff   &#8704;x&#172;Rxx  

 </P>  
<HR><HR><A NAME="583"></A><H2>583.  Nonreflexive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Nonreflexive </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is nonreflexive   iff   R is not reflexive and R is not irreflexive  

 </P>  
<HR><HR><A NAME="584"></A><H2>584.  Asymmetric </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Asymmetric </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is asymmetric   iff   &#8704;x&#8704;y(Rxy &#8594; &#172;Ryx)  

 </P>  
<HR><HR><A NAME="585"></A><H2>585.  Antisymmetric </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Antisymmetric </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is antisymmetric   iff   &#8704;x&#8704;y((Rxy &#8743; Ryx) &#8594; x=y)  

 </P>  
<HR><HR><A NAME="586"></A><H2>586.  Nonsymmetric </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Nonsymmetric </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is nonsymmetric   iff  R is not symmetric and R is not asymmetric.  

 </P>  
<HR><HR><A NAME="587"></A><H2>587.  Intransitive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Intransitive </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is intransitive   iff   &#8704;x&#8704;y&#8704;z((Rxy &#8743; Ryz) &#8594; &#172;Rxz)  

 </P>  
<HR><HR><A NAME="588"></A><H2>588.  Nontransitive </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Nontransitive </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is nontransitive   iff   R is not transitive and R is not intransitive  

 </P>  
<HR><HR><A NAME="589"></A><H2>589.  Equivalence Relation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Relations :: Equivalence Relation </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> R is an equivalence relation  iff   R is reflexive, symmetric and transitive.  

 </P>  
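The properties above (reflexive, symmetric, transitive, and their combination into an equivalence relation) can be checked directly over a finite domain.  A minimal Python sketch, using an invented "same parity" relation as the example:

```python
from itertools import product

def is_reflexive(R, domain):
    # ∀x Rxx
    return all((x, x) in R for x in domain)

def is_symmetric(R, domain):
    # ∀x∀y(Rxy → Ryx)
    return all((y, x) in R for (x, y) in R)

def is_transitive(R, domain):
    # ∀x∀y∀z((Rxy ∧ Ryz) → Rxz)
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_equivalence(R, domain):
    return (is_reflexive(R, domain) and is_symmetric(R, domain)
            and is_transitive(R, domain))

# "Same parity" on {0,1,2,3} is reflexive, symmetric and transitive.
domain = {0, 1, 2, 3}
same_parity = {(x, y) for x, y in product(domain, repeat=2) if x % 2 == y % 2}
print(is_equivalence(same_parity, domain))  # True
```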
<HR><HR><A NAME="590"></A><H2>590.  Equivalence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Basic Concepts :: Equivalence </I> ]</UL>
<H4>Description</H4>                

   <P> A predicate wff A is distinct from a predicate wff B iff the predicate symbols differ or at least one object symbol differs.

</P>  <H4>Examples</H4><UL>            

   <LI> Fac  and  Fac  are identical predicates.

</LI> </UL> 
<HR><HR><A NAME="591"></A><H2>591.  Truth Tables </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Truth Tables </I> ]</UL>
<H4>Description</H4>                

   <P> Use of truth tables is identical to those for Propositional Logic.  However, because of Interpretation Structures, some uses of truth tables for PL are not helpful for predicate logic (e.g. analysis of a single proposition).  

 </P>  
<HR><HR><A NAME="592"></A><H2>592.  Truth Trees </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Truth Trees </I> ]</UL>
<H4>Description</H4>                

   <P> Use of truth trees for predicate logic is identical to that for propositional logic.  The only modification necessary is two additional decomposition rules for the identity predicate (=).  

 </P>  
<HR><HR><A NAME="593"></A><H2>593.  a=b </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: Identity Predicate :: a=b </I> ]</UL>
<H4>Description</H4>                

   <P> If a wff of the form 'a=b' appears on an open path, and another wff P containing either 'a' or 'b' appears unchecked on that path, then write at the bottom of the path any wff not already on the path which is the result of replacing one or more occurrences of either of these name letters by the other in P.  Do not check either 'a=b' or P.  

 </P>  
<HR><HR><A NAME="594"></A><H2>594.  &#172;(a=b) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: Identity Predicate :: &#172;(a=b) </I> ]</UL>
<H4>Description</H4>                

   <P> Close any open path on which a wff of the form &#172;(a = b) occurs.  

 </P>  
<HR><HR><A NAME="595"></A><H2>595.  Algebra </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Algebra </I> ]</UL>
<H4>Description</H4>                

   <P> Use of algebras for predicate logic is identical to that for PL.  

 </P>  
<HR><HR><A NAME="596"></A><H2>596.  Calculi </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi </I> ]</UL>
<H4>Description</H4>                

   <P> Proofs for predicate logic are an extension to those for PL to handle the new concept of identity.  

 </P>  
<HR><HR><A NAME="597"></A><H2>597.  Sequent Notation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Fitch :: Inference Rules :: Sequent Notation </I> ]</UL>


<H4>Description</H4>                

   <P> A modified sequent notation has been developed to accommodate the Fitch calculus.

</P>  <H4>See Also</H4><UL>            

   <LI> See <A HREF="#424" TARGET="baseframe">sequent</A>.

   </LI> <LI> See <A HREF="#485" TARGET="baseframe">Propositional Logic Sequent</A>.  

 </LI> </UL> 
<HR><HR><A NAME="598"></A><H2>598.  Symbolic Replacement </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Fitch :: Inference Rules :: Sequent Notation :: Symbolic Replacement </I> ]</UL>
<H4>Symbolic Replacement</H4>                

   <P> The notation 'Pa/x' means "in formula P, replace all occurrences of x with a."

   </P> <P> The notation 'Pa//x' means "in formula P, replace one or more occurrences of x with a."  

 </P>  
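As a rough illustration of the 'Pa/x' and 'Pa//x' notations, the following Python sketch performs both replacements on a formula treated as a plain string.  This is a simplification: real substitution respects symbol boundaries and quantifier scope.

```python
from itertools import combinations

def replace_all(P, x, a):
    """Pa/x: replace every occurrence of x in P with a."""
    return P.replace(x, a)

def replace_some(P, x, a):
    """Pa//x: the set of all results of replacing one or more
    occurrences of x in P with a."""
    positions = [i for i, c in enumerate(P) if c == x]
    results = set()
    for k in range(1, len(positions) + 1):
        for chosen in combinations(positions, k):
            chars = list(P)
            for i in chosen:
                chars[i] = a
            results.add("".join(chars))
    return results

print(replace_all("Rxx", "x", "a"))           # Raa
print(sorted(replace_some("Rxx", "x", "a")))  # ['Raa', 'Rax', 'Rxa']
```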
<HR><HR><A NAME="599"></A><H2>599.  Elimination (=E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Fitch :: Inference Rules :: Identity :: Elimination (=E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> a=c, Pa  |-  Pc//a  

 </LI> </UL> 
<HR><HR><A NAME="600"></A><H2>600.  Introduction (=I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Fitch :: Inference Rules :: Identity :: Introduction (=I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8870;  &#945;=&#945;<BR>
   For any object symbol &#945;.  

 </LI> </UL> 
<HR><HR><A NAME="601"></A><H2>601.  Add Identity  (addI) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Gensler :: Identity Rules :: Add Identity  (addI) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8870;  a=a<BR>
   Where 'a' is any object symbol  

 </LI> </UL> 
<HR><HR><A NAME="602"></A><H2>602.  Substitution  (Subst) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Gensler :: Identity Rules :: Substitution  (Subst) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> a=b (or b=a), Fa  &#8870;  Fb  

 </LI> </UL> 
<HR><HR><A NAME="603"></A><H2>603.  Identity </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Predicate Logic :: Inference Theories :: Calculi :: Traditional :: Inference Rules :: Identity </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870; a=a<BR>
   Where, a is an arbitrary object symbol.

   </LI> <LI> a = b  &#8596;  b = a<BR>
   Where, a and b are arbitrary object symbols.

   </LI> <LI> Fa, a = b  &#8870;  Fb<BR>
   Where, F is an arbitrary predicate symbol; a and b are arbitrary object symbols.  

 </LI> </UL> 
<HR><HR><A NAME="604"></A><H2>604.  First-Order Logic (FOL) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) </I> ]</UL>


<H4>Alternate Names</H4><UL>            

   <LI> Quantificational Logic

   </LI> <LI> First-Order Predicate Logic with Identity

</LI> </UL> <H4>Description</H4>                

   <P> First-Order Logic studies arguments whose validity depends on 'all', 'no', 'some' and similar notions.  

 </P>  
<HR><HR><A NAME="605"></A><H2>605.  All Lexical Elements of Predicate Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: All Lexical Elements of Predicate Logic </I> ]</UL>
<H4>Description</H4>                

   <P> First-Order Logic is an extension of Predicate Logic.  Therefore, all concepts of Predicate Logic are also concepts of FOL, including the notational elements.  

 </P>  
<HR><HR><A NAME="606"></A><H2>606.  Object Symbol </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: Object Symbol </I> ]</UL>
<H4>Description</H4>                

   <P> An object symbol is a label for some thing or concept, which we call an <I>object</I>.  First-Order Logic distinguishes between two types of object symbol.

</P>  <H4>Notes</H4><UL>            

   <LI> It's common to refer to 'Constant Object Symbols' as 'Objects' or 'Object Symbols', and 'Variable Object Symbols' as 'Variables' or 'Variable Symbols'.  

 </LI> </UL> 
<HR><HR><A NAME="607"></A><H2>607.  Constant Object Symbol </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: Object Symbol :: Constant Object Symbol </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Constants

   </LI> <LI> Constant Symbols

   </LI> <LI> Name Letter/Symbol

</LI> </UL> <H4>Description</H4>                

   <P> A constant object symbol is an object symbol permanently bound (assigned) to a particular object by definition.

</P>  <H4>Notation</H4><UL>            

   <LI> <lower case letter><BR>
   where, <lower case letter> &#8712; { a, ..., t }.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> j	John

   </LI> <LI> p	pen  

 </LI> </UL> 
<HR><HR><A NAME="608"></A><H2>608.  Variable Object Symbol </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: Object Symbol :: Variable Object Symbol </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Variables

   </LI> <LI> Variable Symbols

</LI> </UL> <H4>Notation</H4><UL>            

   <LI> <lower case letter><BR>
   Where, <lower case letter> &#8712; { u, ..., z }.  A place-holder for some arbitrary object.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> x

   </LI> <LI> y

   </LI> <LI> z  

 </LI> </UL> 
<HR><HR><A NAME="609"></A><H2>609.  Quantifier Notation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: Quantifier Notation </I> ]</UL>
<H4>Description</H4>                

   <P> Quantifiers bind (introduce) variables into an expression.  If any variable in an expression is not bound, then it is not a proposition.  

 </P>  
<HR><HR><A NAME="610"></A><H2>610.  Universal </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: Quantifier Notation :: Universal </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> &#8704;xP

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> (x)Px

</LI> </UL> <H4>Semantics</H4>                

   <P> For all/any/each object(s) in the model, the truth of proposition P holds.

</P>  <H4>Examples</H4><UL>            

   <LI> &#8704;x(Bx &#8594; Fx) <BR>
   For all x, if x is a bird, then x has feathers.  

 </LI> </UL> 
<HR><HR><A NAME="611"></A><H2>611.  Existential </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Lexical Elements :: Quantifier Notation :: Existential </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> &#8707;xP

</LI> </UL> <H4>Semantics</H4>                

   <P> There exist object(s) in the model such that the truth of proposition P holds.

</P>  <H4>Examples</H4><UL>            

   <LI> &#8707;x(Bx &#8743; &#172;Fx) <BR>
   There exists an x, such that x is a bird and x does not fly.  

 </LI> </UL> 
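The semantics of both quantifiers can be illustrated by direct evaluation over a small finite model.  A Python sketch, with an invented universe of three objects ('Bx' for "x is a bird", 'Fx' for "x flies", as in the existential example above):

```python
# Invented finite model.
universe = {"tweety", "opus", "rex"}
B = {"tweety", "opus"}   # Bx: x is a bird
F = {"tweety"}           # Fx: x flies

# ∀x(Bx → Fx): true iff every bird in the model flies.
forall_holds = all((x not in B) or (x in F) for x in universe)

# ∃x(Bx ∧ ¬Fx): true iff some object is a bird that does not fly.
exists_holds = any((x in B) and (x not in F) for x in universe)

print(forall_holds)  # False: opus is a bird that does not fly
print(exists_holds)  # True
```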
<HR><HR><A NAME="612"></A><H2>612.  The Dictionary </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: The Dictionary </I> ]</UL>
<H4>Description</H4>                

   <P> FOL needs a dictionary just like Predicate Logic does.  

 </P>  
<HR><HR><A NAME="613"></A><H2>613.  Formation Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Well-Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> Any Proposition Symbol, or n-place Predicate followed by n object symbols, is a WFF.

   </LI> <LI> If &#934; is a wff, so is &#172;&#934;.

   </LI> <LI> If &#934; and &#936; are wffs, so are (&#934; &#8743; &#936;), (&#934; &#8744; &#936;), (&#934; &#8594; &#936;) and (&#934; &#8596; &#936;).

   </LI> <LI> If &#934; is a wff containing an object symbol '&#945;', then any expression of the form &#8704;&#946;&#934;(&#946;/&#945;) or &#8707;&#946;&#934;(&#946;/&#945;) is a wff, where &#934;(&#946;/&#945;) is the result of replacing one or more of the occurrences of '&#945;' in &#934; by some variable '&#946;' not already in &#934;.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> Rule 4 may seem oddly formulated, but it is written such that it excludes free (unbound) variables.  Any FOL proposition with free variables is NOT well formed.

   </LI> <LI> Different variables do not necessarily designate different objects.

   </LI> <LI> Choice of variables makes no difference to meaning.

   </LI> <LI> The same variables used with two different quantifiers does not necessarily designate the same object in each case.

   </LI> <LI> Many, if not most, English sentences which mix universal and existential quantifiers are ambiguous.

   </LI> <LI> The order of consecutive quantifiers affects meaning only when universal and existential quantifiers are mixed.  

 </LI> </UL> 
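Note 1 above (a proposition with free variables is not well formed) can be illustrated with a small free-variable detector.  A Python sketch under a simplifying assumption, flagged in the comments, that a quantifier binds its variable for the remainder of the string:

```python
VARIABLES = set("uvwxyz")   # variable object symbols, per the usual convention
QUANTIFIERS = {"∀", "∃"}

def free_variables(wff):
    """Variables occurring in wff that are not bound by any quantifier.
    Simplifying assumption: a quantifier binds its variable for the
    remainder of the string (real scope tracking needs a parser)."""
    bound, free = set(), set()
    prev = ""
    for ch in wff:
        if prev in QUANTIFIERS and ch in VARIABLES:
            bound.add(ch)
        elif ch in VARIABLES and ch not in bound:
            free.add(ch)
        prev = ch
    return free

print(free_variables("∀x(Bx → Fx)"))  # set(): no free variables, well formed
print(free_variables("Fx ∧ Gy"))      # {'x', 'y'}: free, not well formed
```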
<HR><HR><A NAME="614"></A><H2>614.  any- vs. every- </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: 'Any' & 'Every' :: any- vs. every- </I> ]</UL>
<H4>Description</H4>                

   <P> every- words (everyone, everybody, etc.) usually follow the English word order.

   </P> <P> any- words (anyone, anybody, etc.) usually translate into a universal quantifier at the beginning of the wff.  

 </P>  
<HR><HR><A NAME="615"></A><H2>615.  'Not any' & 'Not every' </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: 'Any' & 'Every' :: 'Not any' & 'Not every' </I> ]</UL>
<H4>Description</H4>                

   <P> Generally translate as follows:

<PRE>
      'not any ...'   ==>   'none ...'   ==>   '&#172;&#8707;x...'

      'not every ...'   ==>   '&#172;&#8704;x...'
</PRE>

   </P> <P> However, these rules are only guidelines and must be evaluated in each case.  For example:

<PRE>
      Not just any player can make that move.
</PRE>

   </P> <P> is correctly translated as

<PRE>
      &#172;&#8704;x(Px &#8594; Mx)
</PRE>  

 </P>  
<HR><HR><A NAME="616"></A><H2>616.  Synonyms for 'every' or 'all' </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: 'Any' & 'Every' :: Synonyms for 'every' or 'all' </I> ]</UL>
<H4>Description</H4>                

   <P> 'Any' is sometimes translated by a universal quantifier; in those cases, it can be paraphrased by 'every' or 'all'.

</P>  <H4>Examples</H4><UL>            

   <LI> Anyone who enjoys desserts likes Sara Lee.<BR>
   'Any' ==> 'Every'<BR>
   Everyone who enjoys desserts likes Sara Lee.<BR>
   &#8704;x(Ex &#8594; Lxs)  

 </LI> </UL> 
<HR><HR><A NAME="617"></A><H2>617.  Not a synonym </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: 'Any' & 'Every' :: Not a synonym </I> ]</UL>
<H4>Description</H4>                

   <P> Sometimes 'any' is not simply synonymous with 'every'.  For example, the following two sentences are translated differently.

<PRE>
	1.   If everyone enjoys desserts, Alicia does.  &#8704;xEx &#8594; Ea
	2.   If anyone enjoys desserts, Alicia does.   &#8707;xEx &#8594; Ea
</PRE>

   </P> <P> Note:  these propositions are conditionals.  The first is trivially true, saying 'if everything is a certain way, then Alicia is that way'; the second is not trivially true, saying, in effect, that if at least one thing is a certain way, then Alicia is that way.  Now compare:

<PRE>
	3.   If anyone enjoys desserts, they enjoy coffee.  &#8704;x(Ex &#8594; Cx)
</PRE>

   </P> <P> This cannot be translated correctly by '&#8707;xEx &#8594; Cx', because that expression has a free occurrence of 'x' and so is not a wff.  Note that the English has an anaphoric pronoun, 'they', linked to its antecedent 'anyone'.  When an English sentence contains a quantified noun phrase in the antecedent of a conditional, and there is an (explicit or implied) anaphoric pronoun linked to that quantified noun phrase, a universal quantifier whose scope is the entire conditional is required to translate the proposition.  

 </P>  
<HR><HR><A NAME="618"></A><H2>618.  Relative Clauses </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: Relative Clauses </I> ]</UL>
<H4>Description</H4>                

   <P> Relative clauses are expressions formed from propositions.  They begin with words like 'that' or with a 'wh-' word such as 'who', 'what', 'which', though often these words are omitted.  Often relative clauses modify nouns or noun phrases, like adjectives.  Translate them as conjoined to predicates symbolizing nouns:

<PRE>
	A car (that) I used to drive was scrapped.
	&#8707;x(Cx &#8743; (Dix &#8743; Sx))
</PRE>

   </P> <P> Restrictive relative clauses are contrasted with appositive relative clauses (the latter are, or could be, set off by parentheses or commas).  Both kinds are always translated as conjunctions.  But where there is a universal quantifier and the relative clause modifies the subject noun phrase, a restrictive relative clause such as (1) is translated differently from an appositive relative clause, (2), as illustrated by the examples:

<PRE>
	1.   All the aldermen who have been convicted are claiming to have medical problems.
	&#8704;x((Ax &#8743; Cx) &#8594; Mx)
	All convicted aldermen are claiming ...

	2.   All the aldermen, who have been convicted, are claiming to have medical problems.
	&#8704;x(Ax &#8594; Cx) &#8743; &#8704;x(Ax &#8594; Mx)
	All aldermen are claiming... and all have been convicted.
</PRE>  

 </P>  
<HR><HR><A NAME="619"></A><H2>619.  Prepositional Phrases </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: Prepositional Phrases </I> ]</UL>
<H4>Description</H4>                

   <P> Prepositions that modify nouns to form prepositional phrases can commonly be translated as two-place, three-place, etc., predicates.

<PRE>
	Everyone 'from' Chicago is a Bulls' fan
	&#8704;x((Px &#8743; Fxc) &#8594; Bx)

	I'm between a rock and a hard place
	&#8707;x(Rx &#8743; &#8707;y(Hy &#8743; Bixy ))
</PRE>  

 </P>  
<HR><HR><A NAME="620"></A><H2>620.  Superlatives </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: Superlatives </I> ]</UL>
<H4>Description</H4>                

   <P> The meaning of superlative forms can be expressed using the comparative form of the same word together with the identity predicate.  For example, for a given UD, we can translate "FloJo is the fastest sprinter" as

<PRE>
	&#8704;x((Sx &#8743; &#172;x=f) &#8594; Ffx)

Where,
	Sx,   x is a sprinter
	Fxy,  x is faster than y
</PRE>

   </P> <P> In general, where 'Fxy' is 'x is more F than y', we have

<PRE>
	a is the most F	&#8704;x(&#172;x=a &#8594; Fax)
	a is the least F	&#8704;x(&#172;x=a &#8594; Fxa)
</PRE>  

 </P>  
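The general superlative forms above can be checked over a finite model.  A Python sketch (the universe and the 'faster than' relation are invented, and the universe is assumed to contain only sprinters, so the 'Sx' conjunct is dropped):

```python
# Invented finite model; Fxy: "x is faster than y".
universe = {"flojo", "gail", "eve"}
faster = {("flojo", "gail"), ("flojo", "eve"), ("gail", "eve")}

# 'f is the fastest':  ∀x(¬x=f → Ffx), with f = "flojo".
fastest = all(x == "flojo" or ("flojo", x) in faster for x in universe)
print(fastest)  # True
```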
<HR><HR><A NAME="621"></A><H2>621.  Numerical Statements </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: Numerical Statements </I> ]</UL>
<H4>Description</H4>                

   <P> The forms below specify the number of elements in a set P, notated |P|.

   </P> <P> For any of the forms illustrated in this section, P may stand for more complex forms:

</P>  <H4>Forms</H4><UL>            

   <LI> |P| = 0<BR>
   &#172;&#8707;xPx

   </LI> <LI> |P| >= n<BR>
   &#8707;x&#8707;y...(Px &#8743; Py &#8743; ... &#8743; &#172;(x=y) &#8743; ...)<BR>
   That is, we need n quantifiers, and sum(1 ... n-1) inequalities.<BR>
   When n = 1, the form simplifies to:<BR>
   &#8707;xPx

   </LI> <LI> |P| <= n<BR>
   &#8704;x&#8704;y((Px &#8743; Py &#8743;...) &#8594; (x=y &#8744; ...))<BR>
   That is, we need n quantifiers, and sum(1 ... n-1) equalities.<BR>
   When n = 1, the form simplifies to:<BR>
   &#8704;x&#8704;y((Px &#8743; Py) &#8594; x=y)

   </LI> <LI> |P| = n<BR>
   &#8707;x&#8707;y(&#172;(x=y) &#8743; &#8704;z(Pz &#8596; (z=x &#8744; z=y)))<BR>
   That is, we need n existential quantifiers (1 universal), and sum(1 ... n-1) equalities.<BR>
   When n = 1, this form simplifies to:<BR>
   &#8707;x(Px &#8743; &#8704;y(Py &#8594; y=x ))

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> At least one N is Z.<BR>
   Here we use form 2 with n = 1  and Px = (Nx &#8743; Zx)<BR>
   &#8707;x(Nx &#8743; Zx)

   </LI> <LI> There are at most two N's that are Z's.<BR>
   Here we use form 3 with n = 2 and Px = (Nx &#8743; Zx)<BR>
   &#8704;x&#8704;y(( (Nx &#8743; Zx) &#8743; (Ny &#8743; Zy) ) &#8594; x=y)  

 </LI> </UL> 
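The quantifier expansions for |P| &gt;= n and |P| &lt;= n can be verified by brute force over a finite universe.  A Python sketch for n = 2, mirroring the forms above (the universe and P are invented for illustration):

```python
from itertools import product

# Invented finite model; Px: "x is even", so |P| = 2 here.
universe = [1, 2, 3, 4]
P = {x for x in universe if x % 2 == 0}

# |P| >= 2:  ∃x∃y(Px ∧ Py ∧ ¬(x=y))
at_least_2 = any(x in P and y in P and x != y
                 for x, y in product(universe, repeat=2))

# |P| <= 2:  ∀x∀y∀z((Px ∧ Py ∧ Pz) → (x=y ∨ x=z ∨ y=z))
at_most_2 = all(not (x in P and y in P and z in P) or (x == y or x == z or y == z)
                for x, y, z in product(universe, repeat=3))

# |P| = 2 is the conjunction of the two.
print(at_least_2 and at_most_2)  # True
```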
<HR><HR><A NAME="622"></A><H2>622.  Abbreviated Notation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: Numerical Statements :: Abbreviated Notation </I> ]</UL>
<H4>Description</H4>                

   <P> It's somewhat common to use an abbreviated notation:

<PRE>
&#8707;<SUP>&gt;=n</SUP>xPx
&#8707;<SUP>&lt;=n</SUP>xPx
&#8707;<SUP>!n</SUP>xPx
&#8707;!xPx
</PRE>

The third form says that there are exactly n objects satisfying P.  The last form says that there is exactly one such object.

</P>  <H4>Notes</H4><UL>            

   <LI> This notation is not part of the official First-Order language.  

 </LI> </UL> 
<HR><HR><A NAME="623"></A><H2>623.  Expressive Limitations of FOL </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Syntax :: Formalization Hints :: Expressive Limitations of FOL </I> ]</UL>
<H4>Description</H4>                

   <P> The completeness proven in metalogic refers to the 'semantic completeness' of FOL.  That is, FOL is complete in the sense that its inference rules generate proofs for all argument forms valid in virtue of the semantics of the identity predicate, the truth-functional operators, and the universal and existential quantifiers.  However, it is incomplete in the sense that there are valid argument forms whose validity depends on the semantics of other sorts of expressions.  

 </P>  
<HR><HR><A NAME="624"></A><H2>624.  Interpretation Structures </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Semantics :: Interpretation Structures </I> ]</UL>
<H4>Description</H4>                

   <P> The Interpretation Structure of FOL expands upon that for Predicate Logic.

 </P>  
<HR><HR><A NAME="625"></A><H2>625.  Complexities </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Semantics :: Interpretation Structures :: Complexities </I> ]</UL>
<H4>Description</H4>                

   <P> FOL introduces one additional complexity concerning the truth values of sentences in FOL.  

 </P>  
<HR><HR><A NAME="626"></A><H2>626.  Quantifiers </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Semantics :: Interpretation Structures :: Complexities :: Quantifiers </I> ]</UL>
<H4>Description</H4>                

   <P> Quantifiers introduce a complexity.  Though Venn diagrams can help in understanding the semantics of simple quantified statements, statements that mix quantifiers cannot be represented in such diagrams.  

 </P>  
<HR><HR><A NAME="627"></A><H2>627.  a-variant (alpha-variant) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Semantics :: Interpretation Structures :: a-variant (alpha-variant) </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Let '@' stand for alpha (&#945;) in the text below.

   </P> <P> @-variants provide a way to deal with the complexity of interpretation introduced by quantifiers.

   </P> <P> Let M be any model and suppose @ is some name symbol.  An @-variant of M is defined as any model that results from M by freely interpreting @ as an object in the universe of M.  If M does not assign any interpretation to @, then an @-variant is a slightly "richer" model than M:  in addition to the interpretations provided by M, it provides an interpretation for @ as well.  On the other hand, if M already assigned an interpretation to @, then an @-variant of M simply represents a new way of interpreting @, keeping everything else exactly as in M.  In this case it is not assumed that the new interpretation of @ is different from its interpretation in M:  the interpretation may be the same, in which case the @-variant coincides with M itself.  So, whenever a model assigns an interpretation to a name symbol @, it counts as an @-variant of itself.

   </P> <P> Using the notion of @-variant, the truth conditions for quantified sentences are now defined as follows:  

 </P>  
<HR><HR><A NAME="628"></A><H2>628.  Universal Quantification </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Semantics :: Interpretation Structures :: a-variant (alpha-variant) :: Universal Quantification </I> ]</UL>
<H4>Description</H4>                

   <P> The quantified expression '&#8704;b P' is true in M if P(@/b) is true in every @-variant of M, where @ is the first name symbol in alphabetic order not already in P.  If P(@/b) is false in some @-variant of M, then '&#8704;b P' is false in M.  

 </P>  
<HR><HR><A NAME="629"></A><H2>629.  Existential Quantification </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Language :: Semantics :: Interpretation Structures :: a-variant (alpha-variant) :: Existential Quantification </I> ]</UL>
<H4>Description</H4>                

   <P> The quantified expression '&#8707;b P' is true in M if P(@/b) is true in some @-variant of M; if P(@/b) is false in every @-variant of M, then '&#8707;b P' is false in M.  

 </P>  
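The @-variant truth conditions in the last two sections can be sketched over a finite model.  In the Python fragment below, a wff body is represented as a callable over name interpretations (an assumption made to keep the sketch short; a real treatment would parse wffs), and each @-variant is a copy of the interpretation with the fresh name '@' assigned to some object:

```python
# Invented finite model: a universe, name interpretations, and a
# one-place predicate.
universe = {"ann", "bo", "cy"}
names = {"a": "ann"}        # M's interpretation of name symbols
tall = {"ann", "bo"}        # Tx: x is tall

def forall(body, names):
    """'∀b P' is true iff P(@/b) is true in every @-variant of M,
    i.e. under every way of interpreting the fresh name '@'."""
    return all(body({**names, "@": obj}) for obj in universe)

def exists(body, names):
    """'∃b P' is true iff P(@/b) is true in some @-variant of M."""
    return any(body({**names, "@": obj}) for obj in universe)

print(forall(lambda n: n["@"] in tall, names))  # False: cy is not tall
print(exists(lambda n: n["@"] in tall, names))  # True
```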
<HR><HR><A NAME="630"></A><H2>630.  Inference Theories </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories </I> ]</UL>
<H4>Description</H4>                

   <P> With the exception of Truth Tables, all tools for working with Propositional Logic are also effective in First-Order Logic.

</P>  <H4>Notes</H4>                

   <P> Truth tables are not usable tools for first-order logic because it's not valid to assign truth values to predicates arbitrarily, in the way that proposition symbols may be arbitrarily assigned truth values.  Moreover, it's not possible to break a quantified wff into atomic formulas which can then be listed as columns on the left of the table.  Any such predicate forms with free variables are not propositions, and thus do not have valuations.  

 </P>  
<HR><HR><A NAME="631"></A><H2>631.  Free Variable </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Basic Concepts :: Free Variable </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A variable in an FOL proposition which is not bound by a quantifier.  

 </P>  
<HR><HR><A NAME="632"></A><H2>632.  Bound Variable </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Basic Concepts :: Bound Variable </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A variable in an expression which has been introduced by a quantifier.  

 </P>  
<HR><HR><A NAME="633"></A><H2>633.  Equivalence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Basic Concepts :: Equivalence </I> ]</UL>
<H4>Description</H4>                

   <P> Two quantified wffs are considered equivalent iff they are identical up to the choice of bound variable names.

</P>  <H4>Notes</H4><UL>            

   <LI> Variable names are incidental.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> &#8704;xFx  and  &#8704;yFy  are equivalent.  

 </LI> </UL> 
<HR><HR><A NAME="634"></A><H2>634.  Instance </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Basic Concepts :: Instance </I> ]</UL>
<H4>Description</H4>                

   <P> When a proposition P follows the form of another proposition Q, we say that P is an instance of Q.  

 </P>  
<HR><HR><A NAME="635"></A><H2>635.  First-Order Form </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Basic Concepts :: Instance :: First-Order Form </I> ]</UL>
<H4>Description</H4>                

   <P> A wff P is an instance of a quantified wff S iff S has a quantifier Q binding variable x and P is equivalent to or an instance of S(a/x), where a is some constant object symbol in P.

</P>  <H4>Examples</H4>                

<PRE>
1:           ( Pa -> Rab ) <-> Qb
2: &#8704;x &#8707;y( Px -> Rxy ) <-> Qy

1 is an instance of 2.

Where,
   x = a
   y = b
</PRE>  

  
<HR><HR><A NAME="636"></A><H2>636.  Truth Trees </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees </I> ]</UL>
<H4>Description</H4>                

   <P> Truth trees use reductio ad absurdum (the indirect method of proof).  The formula being tested is first denied (negated).  Each molecular formula is decomposed until the result is either an atomic proposition or its negation.  If and only if the tree reveals a contradiction on every branch is the formula a tautology.  

 </P>  
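As an illustration of the method for the propositional fragment only, here is a minimal tableau sketch in Python. The tuple encoding of wffs (`"not"`, `"and"`, `"or"`, `"imp"`) is an assumption of this sketch, not the outline's notation, and the first-order decomposition rules are omitted.

```python
def satisfiable(branch):
    """Decompose molecular wffs; True iff some branch stays open."""
    for i, f in enumerate(branch):
        if isinstance(f, str):              # atom: nothing to decompose
            continue
        rest = branch[:i] + branch[i + 1:]
        op = f[0]
        if op == "not":
            g = f[1]
            if isinstance(g, str):          # negated atom: keep as a literal
                continue
            if g[0] == "not":               # double negation
                return satisfiable(rest + [g[1]])
            if g[0] == "and":               # ~(A & B): branch on ~A | ~B
                return (satisfiable(rest + [("not", g[1])]) or
                        satisfiable(rest + [("not", g[2])]))
            if g[0] == "or":                # ~(A v B): stack ~A, ~B
                return satisfiable(rest + [("not", g[1]), ("not", g[2])])
            if g[0] == "imp":               # ~(A -> B): stack A, ~B
                return satisfiable(rest + [g[1], ("not", g[2])])
        if op == "and":                     # A & B: stack both conjuncts
            return satisfiable(rest + [f[1], f[2]])
        if op == "or":                      # A v B: branch
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if op == "imp":                     # A -> B: branch on ~A | B
            return satisfiable(rest + [("not", f[1])]) or satisfiable(rest + [f[2]])
    # only literals remain: the branch is open iff no atom meets its negation
    atoms = {f for f in branch if isinstance(f, str)}
    negated = {f[1] for f in branch if not isinstance(f, str)}
    return not (atoms & negated)

def tautology(f):
    """Negate f; f is a tautology iff every branch of the tree closes."""
    return not satisfiable([("not", f)])
```

For example, `tautology(("or", "P", ("not", "P")))` holds, while `tautology("P")` does not; an argument is valid iff its premises together with the negated conclusion are unsatisfiable.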
<HR><HR><A NAME="637"></A><H2>637.  Setting Up </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Setting Up </I> ]</UL>
<H4>Steps</H4><OL>            

   <LI> Write a list of wffs, a single wff, or a negated wff.  (What you write depends upon what you want to find; see interpretations for more info.)

   </LI> <LI> To make a refutation tree for an argument, list all the premises as separate sentences, then add the negation of the conclusion to the end of the list.  

 </LI> </OL> 
<HR><HR><A NAME="638"></A><H2>638.  Decomposing </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing </I> ]</UL>
<H4>Description</H4>                

   <P> The order in which rules are applied makes no difference to the final answer, but it is usually most efficient to apply nonbranching rules first.  

 </P>  
<HR><HR><A NAME="639"></A><H2>639.  Truth-Functional Operators </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: Truth-Functional Operators </I> ]</UL>
<H4>Description</H4>                

   <P> All decomposition rules for the truth-functional operators of Propositional Logic are valid for FOL.  

 </P>  
<HR><HR><A NAME="640"></A><H2>640.  Identity Predicate (=) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: Identity Predicate (=) </I> ]</UL>
<H4>Description</H4>                

   <P> All decomposition rules for the identity predicate of Predicate Logic are valid for FOL.  

 </P>  
<HR><HR><A NAME="641"></A><H2>641.  &#8704;xP </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: &#8704;xP </I> ]</UL>
<H4>Description</H4>                

   <P> If a wff of the form &#8704;xP appears on an open path, then if 'a' is a name letter that occurs in some wff on that path, write P(a/x) (the result of replacing all occurrences of 'x' in P by 'a') at the bottom of the path.  If no wff containing a name letter appears on the path, then choose some name letter 'a' and write P(a/x) at the bottom of the path.  In either case, do not check &#8704;xP.  

 </P>  
<HR><HR><A NAME="642"></A><H2>642.  &#172;&#8704;xP </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: &#172;&#8704;xP </I> ]</UL>
<H4>Description</H4>                

   <P> If an unchecked wff of the form &#172;&#8704;xP appears on an open path, check it and write &#8707;x&#172;P at the bottom of every open path that contains the newly checked wff.  

 </P>  
<HR><HR><A NAME="643"></A><H2>643.  &#8707;xP </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: &#8707;xP </I> ]</UL>
<H4>Description</H4>                

   <P> If an unchecked wff of the form &#8707;xP appears on an open path, check it.  Then choose a name letter 'a' that does not yet appear anywhere on that path and write P(a/x) (the result of replacing every occurrence of 'x' in P by 'a') at the bottom of every open path that contains the newly checked wff.  

 </P>  
<HR><HR><A NAME="644"></A><H2>644.  &#172;&#8707;xP </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Truth Trees :: Construction :: Decomposing :: &#172;&#8707;xP </I> ]</UL>
<H4>Description</H4>                

   <P> If an unchecked wff of the form &#172;&#8707;xP appears on an open path, check it and write &#8704;x&#172;P at the bottom of every open path that contains the newly checked wff.  

 </P>  
<HR><HR><A NAME="645"></A><H2>645.  Algebraic Equivalents </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Algebraic Equivalents </I> ]</UL>
<H4>Description</H4>                

   <P> An algebraic equivalent is a tautology whose top-level operator is a biconditional.  Equivalents can be algebraically applied to any wff or sub-wff to get an equivalent formula.  

 </P>  
<HR><HR><A NAME="646"></A><H2>646.  DeMorgan's for Quantifiers </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Algebraic Equivalents :: Quantified Logic :: DeMorgan's for Quantifiers </I> ]</UL>
<H4>Equivalences</H4><UL>            

   <LI> &#172;&#8704;xPx  &#8596;  &#8707;x&#172;Px

   </LI> <LI> &#172;&#8707;xPx  &#8596;  &#8704;x&#172;Px  

 </LI> </UL> 
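Both laws can be machine-checked exhaustively on a small finite domain by letting the extension of P range over every subset; the following Python sketch uses an illustrative four-element domain (the domain and the subset encoding of predicates are assumptions of the sketch).

```python
from itertools import chain, combinations

domain = [0, 1, 2, 3]

def forall(pred):
    return all(pred(x) for x in domain)

def exists(pred):
    return any(pred(x) for x in domain)

def subsets(s):
    # every possible extension of a one-place predicate on the domain
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for ext in subsets(domain):
    P = lambda x, e=ext: x in e
    # ~AxPx <-> Ex~Px
    assert (not forall(P)) == exists(lambda x: not P(x))
    # ~ExPx <-> Ax~Px
    assert (not exists(P)) == forall(lambda x: not P(x))
```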
<HR><HR><A NAME="647"></A><H2>647.  &#8704;, &#8707;, &#8743; and &#8744; </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Algebraic Equivalents :: Quantified Logic :: &#8704;, &#8707;, &#8743; and &#8744; </I> ]</UL>
<H4>Equivalences</H4><UL>            

   <LI> &#8704;x(Px &#8743; Qx)  &#8596;  (&#8704;xPx &#8743; &#8704;xQx)

   </LI> <LI> &#8707;x(Px &#8744; Qx)  &#8596;  (&#8707;xPx &#8744; &#8707;xQx)  

 </LI> </UL> 
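These two distribution laws can be verified the same way, by letting the extensions of both P and Q range over every subset of an illustrative finite domain (the encoding is an assumption of this sketch):

```python
from itertools import chain, combinations

domain = [0, 1, 2]
subsets = lambda s: chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
forall = lambda pred: all(pred(x) for x in domain)
exists = lambda pred: any(pred(x) for x in domain)

for pext in subsets(domain):
    for qext in subsets(domain):
        P = lambda x, e=pext: x in e
        Q = lambda x, e=qext: x in e
        # Ax(Px & Qx) <-> (AxPx & AxQx)
        assert forall(lambda x: P(x) and Q(x)) == (forall(P) and forall(Q))
        # Ex(Px v Qx) <-> (ExPx v ExQx)
        assert exists(lambda x: P(x) or Q(x)) == (exists(P) or exists(Q))
```

Note that the dual pairs (&#8704; with &#8744;, &#8707; with &#8743;) are not equivalences, which the same loop would reveal.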
<HR><HR><A NAME="648"></A><H2>648.  Variables </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Algebraic Equivalents :: Variables </I> ]</UL>
<H4>Equivalences</H4><UL>            

   <LI> &#8704;xPx  &#8596;  &#8704;yPy

   </LI> <LI> &#8707;xPx  &#8596;  &#8707;yPy

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> This just states that changing the variable does not change the wff.  

 </LI> </UL> 
<HR><HR><A NAME="649"></A><H2>649.  Identity </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Algebraic Equivalents :: Identity </I> ]</UL>
<H4>Description</H4><UL>            

   <LI> &#8704;xP  &#8596;  P

   </LI> <LI> &#8707;xP  &#8596;  P

   </LI> <LI> &#8704;x(P &#8744; Qx)  &#8596;  (P &#8744; &#8704;xQx)

   </LI> <LI> &#8707;x(P &#8743; Qx)  &#8596;  (P &#8743; &#8707;xQx)<BR>
   Where x does not occur free in P.  

 </LI> </UL> 
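Since P here contains no free occurrence of x, it behaves as a fixed truth value, and the laws can be checked over a small nonempty domain (the first two fail on the empty domain). A Python sketch under the same illustrative subset encoding:

```python
from itertools import chain, combinations

domain = [0, 1, 2]   # nonempty, as the first two laws require
subsets = lambda s: chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
forall = lambda pred: all(pred(x) for x in domain)
exists = lambda pred: any(pred(x) for x in domain)

for P in (True, False):              # P is x-free: just a truth value
    assert forall(lambda x: P) == P  # AxP <-> P
    assert exists(lambda x: P) == P  # ExP <-> P
    for qext in subsets(domain):
        Q = lambda x, e=qext: x in e
        # Ax(P v Qx) <-> (P v AxQx)
        assert forall(lambda x: P or Q(x)) == (P or forall(Q))
        # Ex(P & Qx) <-> (P & ExQx)
        assert exists(lambda x: P and Q(x)) == (P and exists(Q))
```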
<HR><HR><A NAME="650"></A><H2>650.  Quantifier Scope </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Algebraic Equivalents :: Quantifier Scope </I> ]</UL>
<H4>Description</H4>                

   <P> Nested quantifiers may combine with the truth-functional operators in many equivalent ways.

</P>  <H4>Equivalences</H4>                

   <P> Given that @ represents conjunction or disjunction, the following are equivalence pairs:

</P>  <H4></H4><UL>            

   <LI> &#8707;aPa @ Q  &#8596;  &#8707;a(Pa @ Q)

   </LI> <LI> &#8704;aPa @ Q  &#8596;  &#8704;a(Pa @ Q)

   </LI> <LI> P @ &#8707;aQa  &#8596;  &#8707;a(P @ Qa)

   </LI> <LI> P @ &#8704;aQa  &#8596;  &#8704;a(P @ Qa)

</LI> </UL> <H4></H4>                

   <P> When a quantifier is in the consequent of a conditional, it works in the same way.

</P>  <H4></H4><UL>            

   <LI> P -> &#8707;aQa  &#8596;  &#8707;a(P -> Qa)

   </LI> <LI> P -> &#8704;aQa  &#8596;  &#8704;a(P -> Qa)

</LI> </UL> <H4></H4>                

   <P> However, when it's in the antecedent of a conditional, the equivalences work differently.

</P>  <H4></H4><UL>            

   <LI> &#8707;aPa -> Q  &#8596;  &#8704;a(Pa -> Q)

   </LI> <LI> &#8704;aPa -> Q  &#8596;  &#8707;a(Pa -> Q)  

 </LI> </UL> 
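The flip of the quantifier across the antecedent can be checked the same way; note that Q must contain no free occurrence of the quantified variable. A Python sketch over an illustrative finite domain:

```python
from itertools import chain, combinations

domain = [0, 1, 2]
subsets = lambda s: chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
forall = lambda pred: all(pred(x) for x in domain)
exists = lambda pred: any(pred(x) for x in domain)
implies = lambda a, b: (not a) or b

for pext in subsets(domain):
    for Q in (True, False):          # Q has no free occurrence of the variable
        P = lambda x, e=pext: x in e
        # EaPa -> Q  <->  Aa(Pa -> Q): the existential becomes universal
        assert implies(exists(P), Q) == forall(lambda x: implies(P(x), Q))
        # AaPa -> Q  <->  Ea(Pa -> Q): the universal becomes existential
        assert implies(forall(P), Q) == exists(lambda x: implies(P(x), Q))
```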
<HR><HR><A NAME="651"></A><H2>651.  Simplification </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Simplification </I> ]</UL>
<H4>Description</H4>                

   <P> All Simplification techniques for the truth-functional operators of Propositional Logic are valid in FOL.  

 </P>  
<HR><HR><A NAME="652"></A><H2>652.  Normal Forms </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Normal Forms </I> ]</UL>
<H4>Description</H4>                

   <P> All normal forms for Propositional Logic are valid.  

 </P>  
<HR><HR><A NAME="653"></A><H2>653.  Prenex </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Algebra :: Normal Forms :: Prenex </I> ]</UL>
<H4>Description</H4>                

   <P> A wff is in prenex normal form iff it contains no quantifiers or all of its quantifiers are in front.  

 </P>  
<HR><HR><A NAME="654"></A><H2>654.  Calculi </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi </I> ]</UL>
<H4>Description</H4>                

   <P> First order proofs are an extension to predicate logic proofs.  All that is valid for predicate logic proofs is also valid for first-order proofs.  What's listed here are only the extensions to handle the additional concepts of first-order logic.

   </P> <P> It is not valid to mix systems, though each system is equally sound and complete.  

 </P>  
<HR><HR><A NAME="655"></A><H2>655.  Fitch (Intr/Elim) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) </I> ]</UL>
<H4>Description</H4>                

   <P> The Fitch calculus is characterized by its nested subproofs.

</P>  <H4>Notes</H4>                

   <P> All of the rules of the Fitch calculus for Propositional Logic are valid for that of FOL.  

 </P>  
<HR><HR><A NAME="656"></A><H2>656.  Sequent Notation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Sequent Notation </I> ]</UL>


<H4>Description</H4>                

   <P> A modified sequent notation has been developed to accommodate the Fitch calculus.

</P>  <H4>See Also</H4><UL>            

   <LI> See <A HREF="#424" TARGET="baseframe">sequent</A>.

   </LI> <LI> See <A HREF="#485" TARGET="baseframe">Propositional Logic Sequent</A>.

   </LI> <LI> See <A HREF="#597" TARGET="baseframe">Predicate Logic Sequent</A>.  

 </LI> </UL> 
<HR><HR><A NAME="657"></A><H2>657.  Constant Object Symbol Introduction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Sequent Notation :: Constant Object Symbol Introduction </I> ]</UL>
<H4>Description</H4>                

   <P> '[c]' placed in front of a wff in the sequent means: introduce some arbitrary constant object symbol not already active at the current proof step.  If a constant object symbol is introduced within a subproof, then the symbol is only active within that subproof.  Once the subproof terminates, the symbol is no longer active and cannot be used again (unless reintroduced with another use of '[c]').

 </P>  
<HR><HR><A NAME="658"></A><H2>658.  Identity </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Identity </I> ]</UL>
<H4>Description</H4>                

   <P> The identity rules can be reexpressed in terms of FOL.  

 </P>  
<HR><HR><A NAME="659"></A><H2>659.  Elimination (=E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Identity :: Elimination (=E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> a=c, Pa  |-  Pc//a  

 </LI> </UL> 
<HR><HR><A NAME="660"></A><H2>660.  Introduction (=I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Identity :: Introduction (=I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8870;  &#8704;x x=x  

 </LI> </UL> 
<HR><HR><A NAME="661"></A><H2>661.  Elimination (&#8704;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Quantifiers :: Universal :: Elimination (&#8704;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;xPx  &#8870;  Pc/x<BR>
   where c is any arbitrary constant object symbol (both active and inactive symbols are valid)  

 </LI> </UL> 
<HR><HR><A NAME="662"></A><H2>662.  Introduction (&#8704;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Quantifiers :: Universal :: Introduction (&#8704;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> ( [c]  &#8870;  Pc )  &#8870;  &#8704;xPx/c<BR>
   Where c is not in any Assumption or Hypothesis in effect at the line on which '[c]  Pc' occurs.  

 </LI> </UL> 
<HR><HR><A NAME="663"></A><H2>663.  Elimination (&#8707;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Quantifiers :: Existential :: Elimination (&#8707;E) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8707;xPx, ( [c]  Pc/x  &#8870;  Q )  &#8870;  Q<BR>
   Where c is not in effect outside the subproof.  

 </LI> </UL> 
<HR><HR><A NAME="664"></A><H2>664.  Introduction (&#8707;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Quantifiers :: Existential :: Introduction (&#8707;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> Pc  &#8870;  &#8707;xPx//c  

 </LI> </UL> 
<HR><HR><A NAME="665"></A><H2>665.  Derived Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Derived Rules </I> ]</UL>
<H4>Description</H4>                

   <P> All that holds for Predicate Logic also holds for First-Order Logic.  

 </P>  
<HR><HR><A NAME="666"></A><H2>666.  Quantifier Exchange (QE) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Inference Rules :: Derived Rules :: Theorems :: Equivalences :: Quantifier Exchange (QE) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870;  &#172;&#8704;x &#172;F(x)  &#8596;  &#8707;x F(x)

   </LI> <LI> &#8870;  &#172;&#8704;x F(x)  &#8596;  &#8707;x &#172;F(x)

   </LI> <LI> &#8870;  &#8704;x &#172;F(x)  &#8596;  &#172;&#8707;x F(x)

   </LI> <LI> &#8870;  &#8704;x F(x)  &#8596; &#172;&#8707;x &#172;F(x)  

 </LI> </UL> 
<HR><HR><A NAME="667"></A><H2>667.  Varzi </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Variants :: Varzi </I> ]</UL>
<H4>Description</H4>                

   <P> This is the flavor of Fitch calculus presented in 'Schaum's Outline of Logic'.

</P>  <H4>Note</H4><UL>            

   <LI> Varzi treats Fitch as the most fundamental calculus, from which he derives the traditional ('more abstract') rules (e.g., Modus Ponens, Modus Tollens, the Equivalences).  Once these are proven, more abstract proofs can be made using them instead of almost exclusively the Natural Deduction rules, which produces much more intuitive proofs.

   </LI> <LI> This list includes only the rules that differ from those already stated.  

 </LI> </UL> 
<HR><HR><A NAME="668"></A><H2>668.  Universal Introduction (&#8704;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Variants :: Varzi :: Universal Introduction (&#8704;I) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> [c]  Pc  &#8870;  &#8704;xPx/c<BR>
   Where c is not in any Assumption or Hypothesis in effect at the line on which '[c]  Pc' occurs.  

 </LI> </UL> 
<HR><HR><A NAME="669"></A><H2>669.  Universal Introduction (&#8704;I) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Variants :: Barwise :: Universal Introduction (&#8704;I) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> ( [c]  Pc  &#8870;  Qc )  &#8870;  &#8704;x(Px/c &#8594; Qx/c)<BR>
   Where c is not in effect outside the subproof.

   </LI> <LI> ( [c]        &#8870;  Qc )  &#8870;  &#8704;xQx/c<BR>
   Where c is not in effect outside the subproof.  

 </LI> </UL> 
<HR><HR><A NAME="670"></A><H2>670.  By conclusion form </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Proof Strategies :: By conclusion form </I> ]</UL>
<H4>Description</H4>                

   <P> Many proofs can be solved by examining the form of the conclusion.

   </P> <P> If that fails there are some alternate techniques.  

 </P>  
<HR><HR><A NAME="671"></A><H2>671.  Quantifiers </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Proof Strategies :: Quantifiers </I> ]</UL>
<H4>Description</H4>                

   <P> All four quantifier rules operate only at the leftmost position of a formula, i.e., only on a quantifier whose scope is the whole formula.

   </P> <P> To prove an existentially or universally quantified conclusion, the typical strategy is first to prove a formula from which this conclusion can be obtained by &#8707;I or &#8704;I.  So, 

<PRE>
To prove:			First Prove:

&#8707;xFx			Fa
&#8704;x(Fx &#8594; Gx)		Fa &#8594; Ga
&#8704;x&#172;Fx			&#172;Fa
&#8704;x&#8707;yFxy		&#8707;yFay
&#8707;yFay			Fab
&#8707;xFxx			Faa
</PRE>

   </P> <P> If the conclusion is in the form of a negation, conjunction, disjunction, conditional or biconditional, then it is usually best to employ the propositional calculus strategy for proving conclusions of that form.  

 </P>  
<HR><HR><A NAME="672"></A><H2>672.  &#8707;(!n)x Px </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Fitch (Intr/Elim) :: Proof Strategies :: Numerical Claims :: &#8707;(!n)x Px </I> ]</UL>
<H4>Description</H4>                

   <P> To prove &#8707;(!n)x Px, you need to prove two things: that there are at least n such objects, and that there are at most n such objects.  

 </P>  
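Over a finite domain the two halves of the claim can be expressed directly: 'at least n' as the existence of n distinct witnesses, and 'at most n' as the denial of n+1 such witnesses. A Python sketch with an illustrative domain and predicate:

```python
from itertools import combinations

domain = [0, 1, 2, 3, 4]
P = lambda x: x % 2 == 0          # holds of exactly 0, 2, 4

def at_least(n):
    # some n pairwise-distinct objects all satisfy P
    return any(all(P(x) for x in c) for c in combinations(domain, n))

def at_most(n):
    # there are no n+1 pairwise-distinct objects all satisfying P
    return not at_least(n + 1)

def exactly(n):
    # E(!n)x Px decomposes into the conjunction of the two halves
    return at_least(n) and at_most(n)
```

Here `exactly(3)` holds while `exactly(2)` and `exactly(4)` do not, mirroring the two-part proof obligation.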
<HR><HR><A NAME="673"></A><H2>673.  Gensler (Simpl/Infer) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Gensler (Simpl/Infer) </I> ]</UL>
<H4>Description</H4>                

   <P> In the Gensler calculus, you assume the premises and hypothesize the negation of the conclusion.  Use the S and I rules to try to reduce all propositions down to atomic formulas and their negations.  If you end up with a proposition P and its negation &#172;P, then you can infer a contradiction and thus the negation of the hypothesis because of RAA.

</P>  <H4>Notes</H4>                

   <P> All of the rules of the Gensler calculus for Propositional Logic are valid for that of FOL.  

 </P>  
<HR><HR><A NAME="674"></A><H2>674.  Quantifier Exchange (QE) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Gensler (Simpl/Infer) :: Quantifier Rules :: Quantifier Exchange (QE) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8870;  &#172;&#8704;xPx  &#8596;  &#8707;x&#172;Px

   </LI> <LI> &#8870;  &#172;&#8707;xPx  &#8596;  &#8704;x&#172;Px  

 </LI> </UL> 
<HR><HR><A NAME="675"></A><H2>675.  Drop Existential (DE) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Gensler (Simpl/Infer) :: Quantifier Rules :: Drop Existential (DE) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8707;xFx  &#8870;  Fa/x<BR>
   Where 'a' is a <I>new</I> object symbol.  

 </LI> </UL> 
<HR><HR><A NAME="676"></A><H2>676.  Drop Universal (DU) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Gensler (Simpl/Infer) :: Quantifier Rules :: Drop Universal (DU) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;xFx  &#8870;  Fa/x<BR>
   Where 'a' is an object symbol.  

 </LI> </UL> 
<HR><HR><A NAME="677"></A><H2>677.  Instantiation (UI) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Traditional :: Quantifiers :: Universal :: Instantiation (UI) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;xPx  &#8870;  Pc/x<BR>
   Where, c is an arbitrary object symbol.  

 </LI> </UL> 
<HR><HR><A NAME="678"></A><H2>678.  Generalization (UG) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Traditional :: Quantifiers :: Universal :: Generalization (UG) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> Pc  &#8870;  &#8704;xPx/c<BR>
   Where, c denotes "any arbitrarily selected individual".  

 </LI> </UL> 
<HR><HR><A NAME="679"></A><H2>679.  Instantiation (EI) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Traditional :: Quantifiers :: Existential :: Instantiation (EI) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8707;xPx  &#8870;  [c] Pc/x<BR>
   Where c is any individual constant object symbol having no previous occurrence in the context.  

 </LI> </UL> 
<HR><HR><A NAME="680"></A><H2>680.  Generalization (EG) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Traditional :: Quantifiers :: Existential :: Generalization (EG) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> Fc  &#8870;  &#8707;xFx/c<BR>
   Where, c is any individual symbol.  

 </LI> </UL> 
<HR><HR><A NAME="681"></A><H2>681.  Quantifier Interchange </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: First-Order Logic (FOL) :: Inference Theories :: Calculi :: Traditional :: Quantifier Interchange </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8704;xFx  &#8596;  &#172;&#8707;x&#172;Fx

   </LI> <LI> &#172;&#8704;xFx  &#8596;  &#8707;x&#172;Fx

   </LI> <LI> &#8707;xFx  &#8596; &#172;&#8704;x&#172;Fx

   </LI> <LI> &#172;&#8707;xFx &#8596; &#8704;x&#172;Fx  

 </LI> </UL> 
<HR><HR><A NAME="682"></A><H2>682.  Second-Order Logic (SOL) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) </I> ]</UL>
<H4>Description</H4>                

   <P> In second-order logic, quantifiers and variables may be used to signify predicates.

</P>  <H4>Examples</H4><UL>            

   <LI> The following argument is deductively valid, but cannot be demonstrated using FOL or an axiomatic extension of FOL

<PRE>
   Carlos and Anita are both Mexican.
   <B>&#8756;</B> Carlos and Anita have something in common.
</PRE>

   </LI> <LI> This is because the 'something' stands for some unknown property predicate.  In second-order logic we can introduce a new kind of variable for predicates and bind them with quantifiers.  Thus, the argument becomes

<PRE>
	Mc &#8743; Ma  &#8870;  &#8707;P(Pc &#8743; Pa)
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Second-order logic is incomplete, meaning that given a set of premises p and a conclusion c which is a logical consequence of p, there may not exist a proof of c from p.  

 </LI> </UL> 
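The second-order entailment above can be checked on a small finite model by letting the property variable P range over every subset of the domain; in this Python sketch the three-element domain and the reading of M as 'is Mexican' are illustrative assumptions.

```python
from itertools import chain, combinations

domain = ["carlos", "anita", "juan"]
subsets = lambda s: chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def exists_property(phi):
    # second-order EP: the property variable ranges over every subset of the domain
    return any(phi(lambda x, e=ext: x in e) for ext in subsets(domain))

# In every interpretation of M on this domain where the premise Mc & Ma holds,
# the conclusion EP(Pc & Pa) holds as well.
for mext in subsets(domain):
    M = lambda x, e=mext: x in e
    if M("carlos") and M("anita"):
        assert exists_property(lambda P: P("carlos") and P("anita"))
```

The witness is simply the extension of M itself (or the set {Carlos, Anita}), which is exactly the 'something in common'.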
<HR><HR><A NAME="683"></A><H2>683.  Lexical Elements </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Language :: Syntax :: Lexical Elements </I> ]</UL>
<H4>Description</H4>                

   No new lexical elements are needed.  

  
<HR><HR><A NAME="684"></A><H2>684.  Formation Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Language :: Syntax :: Well-Formed Formulas :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> Any Proposition Symbol, or n-place Predicate followed by n object symbols, is a WFF.

   </LI> <LI> If &#934; is a wff, so is &#172;&#934;.

   </LI> <LI> If &#934; and &#936; are wffs, so are (&#934; &#8743; &#936;), (&#934; &#8744; &#936;), (&#934; &#8594; &#936;) and (&#934; &#8596; &#936;).

   </LI> <LI> If &#934; is a wff containing an object symbol '&#945;', then any expression of the form &#8704;&#946;&#934;(&#946;/&#945;) or &#8707;&#946;&#934;(&#946;/&#945;) is a wff, where &#934;(&#946;/&#945;) is the result of replacing one or more of the occurrences of '&#945;' in &#934; by some variable '&#946;' not already in &#934;.

   </LI> <LI> If A is a wff containing a one-place predicate P, then any formula of the form &#8704;B A(B/P) or &#8707;B A(B/P) is a wff, where A(B/P) is the result of replacing one or more of the occurrences of P in A by some predicate variable B not already in A.
  

 </LI> </OL> 
<HR><HR><A NAME="685"></A><H2>685.  Semantics </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Language :: Semantics </I> ]</UL>
<H4>Description</H4>                

   <P> The semantics of this language is like that of FOL, except of course that the truth conditions of wffs involve second-order quantification.  The simplest way to formulate these conditions is patterned after the treatment of first-order quantification.  If M is a model and P a predicate letter, define a P-variant of M to be any model that results from M by freely interpreting P as a subset of the universe of M.  Thus, if M does not assign any interpretation to P, a P-variant will extend M by providing an interpretation for the additional symbol P; otherwise it will simply represent one alternative way of interpreting P on the universe of M, keeping everything else exactly as in M.  (Every model that is already defined for P counts as a P-variant of itself.)  The truth conditions for quantified sentences can then be defined as follows:  

 </P>  
<HR><HR><A NAME="686"></A><H2>686.  Universals </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Language :: Semantics :: Universals </I> ]</UL>
<H4>Description</H4>                

   <P> A universal quantification &#8704;B A is true in a model M if the wff A(P/B) is true in every P-variant of M, where P is the first predicate letter in the alphabetic order not occurring in A and A(P/B) is the result of replacing all occurrences of B in A by P; if A(P/B) is false in some P-variant of M, then &#8704;B A is false in M.  

 </P>  
<HR><HR><A NAME="687"></A><H2>687.  Existentials </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Language :: Semantics :: Existentials </I> ]</UL>
<H4>Description</H4>                

   <P> An existential quantification &#8707;B A is true in M if the wff A(P/B) is true in some P-variant of M, where P and A(P/B) are as in the universal case; if A(P/B) is false in every P-variant of M, then &#8707;B A is false in M.  

 </P>  
<HR><HR><A NAME="688"></A><H2>688.  Formal Proofs </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Inference Theory :: Calculi :: Fitch :: Inference Rules :: Formal Proofs </I> ]</UL>
<P> Second-order logic includes all the inference rules of the predicate calculus, but the four quantifier rules of FOL are generalized so that all four also apply to quantifiers binding predicate variables.  The details are fairly straightforward, and we shall not bother to state them precisely here.  We must note, however, that in the application of these inference rules, not only one-place predicates but also open formulas on a single variable are allowed as replacements for predicate variables.  This is because, like one-place predicates, open formulas on a single variable represent properties, though these properties are logically complex.  The following example illustrates.
</P>
<PRE>
1.	Fa &#8744; &#172;Fa		TI
2.	&#8704;x(Fx &#8744; &#172;Fx)	1 &#8704;I
3.	&#8707;P&#8704;xPx		2 &#8707;I
</PRE>

 
<HR><HR><A NAME="689"></A><H2>689.  Identity Predicate </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Second-Order Logic (SOL) :: Inference Theory :: Calculi :: Fitch :: Inference Rules :: Identity Predicate </I> ]</UL>
<P> To have the identity predicate in SOL, we need at least one new inference rule to take account of how it interacts with the quantification of SOL.  One way to do this is to take Leibniz' Law as an axiom:
</P>
<PRE>
	&#8704;x&#8704;y(&#8704;P(Px &#8596; Py) &#8596; x=y)
</PRE>
<P> This single axiom is sufficient in place of the =I and =E rules of FOL.
</P>
<P> Example:
</P>
<PRE>
	&#8870;  &#8704;x x=x

1.  &#8704;x&#8704;y(&#8704;P(Px &#8596; Py) &#8596; x=y)	Axiom
2.  &#8704;y(&#8704;P(Pa &#8596; Py) &#8596; a=y)	1 &#8704;E
3.  &#8704;P(Pa &#8596; Pa) &#8596; a=a		2 &#8704;E
4.  Fa &#8594; Fa			TI (provable in PL)
5.  Fa &#8596; Fa			4,4 &#8596;I
6.  &#8704;P(Pa &#8596; Pa)			5 &#8704;I
7.  &#8704;P(Pa &#8596; Pa) &#8594; a=a		3 &#8596;E
8.  a=a				6,7 &#8594;E
9.  &#8704;x x=x			8 &#8704;I
</PRE>
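Leibniz' Law can also be confirmed on a finite model under the set interpretation, with the property quantifier ranging over all subsets of the domain; a Python sketch with an illustrative three-element domain:

```python
from itertools import chain, combinations

domain = [0, 1, 2]
# Under the set interpretation, the property quantifier AP ranges over all subsets.
properties = [set(c) for c in
              chain.from_iterable(combinations(domain, r)
                                  for r in range(len(domain) + 1))]

def indiscernible(x, y):
    # AP(Px <-> Py): x and y fall under exactly the same properties
    return all((x in P) == (y in P) for P in properties)

# Leibniz' Law: AxAy(AP(Px <-> Py) <-> x=y) holds in this model
assert all(indiscernible(x, y) == (x == y) for x in domain for y in domain)
```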

 
<HR><HR><A NAME="690"></A><H2>690.  Type Systems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Type Systems </I> ]</UL>
<H4>Description</H4>                

   <P> The name 'second-order logic' originates with a conception of logic as a hierarchical structure describing individuals, properties, properties of properties, and so on.  Any language whose variables range only over individuals (people, trees, electrons, nations, or planets, for example) is a first-order language.  Thus the system of FOL is often called the first-order predicate logic, because its variables stand only for individuals.  Now individuals have a variety of properties (such as being Mexican, being human, and being green), and any language whose variables range just over individuals and their properties is called a second-order language.  Moreover, properties themselves may have properties; the property of being Mexican, for example, has the property of being shared by more than two people.  So there are also third-order languages (whose variables range over individuals, their properties, and properties of their properties).  Indeed, there are nth-order languages for every positive integer n.  An entire array of such languages is called a type system or theory of types.  

 </P>  
<HR><HR><A NAME="691"></A><H2>691.  Incompleteness </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Higher-Order Logics :: Incompleteness </I> ]</UL>
<H4>Description</H4>                

   <P> When SOL properties are interpreted through the set interpretation structure presented in the interpretation section of FOL (the standard semantics), it can be shown that our second-order rules are not complete, and that it is impossible in principle to state a complete set of rules.  It is possible to create alternate interpretation structures (such as Henkin's general models) in which SOL is complete.

   </P> <P> SOL and indeed all higher-order logics can be self-referencing, and this lies at the root of their incompleteness.  Suppose we have a predicate of predicates, 'self-predicable': a predicate G is self-predicable if GG holds.  Now let N be the predicate 'non-self-predicable'.  Is N itself self-predicable?  NN?  Suppose it is not.  Then N is non-self-predicable; but that is exactly what N expresses, so NN holds and N is self-predicable after all.  Either answer yields its opposite.  This is a contradiction, a paradox.

   </P> <P> To avoid such contradictions, Russell developed the notion of a type hierarchy in which predicates of higher orders apply only to predicates or objects of lower orders; no predicate may be applied to itself.  

 </P>  
<HR><HR><A NAME="692"></A><H2>692.  New Quantifiers </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers </I> ]</UL>
<H4>Description</H4>                

   <P> See Deduction:First-Order Predicate Logic With Identity:The Language:Hints on Formalization:Numerical Statements  

 </P>  
<HR><HR><A NAME="693"></A><H2>693.  Adding </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Adding </I> ]</UL>
<H4>Description</H4>                

   <P> Any new quantifier Q added to the language requires WFFs of the form Qx(A, B), which says, "Q x's satisfying A satisfy B" or, "Q A's are B's".  

   </P> <P> So, if Q is 'most', then<BR>
	- Most x satisfying A, satisfy B.<BR>
	- Most A's are B's.

   </P> <P> Alternatively, if some P applies to 'most' objects in the universe of discourse, then we may write Qx Px, which is just shorthand for Qx(x=x, Px).  

 </P>  
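
As a concrete illustration (not from the original text), a generalized quantifier can be modeled as a relation between two finite sets; reading 'most' as a strict majority is an assumption made here for the sketch.

```python
def most(A, B):
    """Qx(Ax, Bx) for Q = 'most': more than half of A lies in B.
    (Reading 'most' as a strict majority is an illustrative choice.)"""
    return len(A & B) > len(A) / 2

universe = set(range(8))     # a small universe of discourse
primes = {2, 3, 5, 7}
odds = {1, 3, 5, 7}

print(most(primes, odds))    # 3 of the 4 primes are odd -> True
# 'Qx Px' as shorthand for 'Qx(x=x, Px)': take the first argument
# to be the whole universe of discourse.
print(most(universe, odds))  # 4 of 8 are odd, not a strict majority -> False
```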
<HR><HR><A NAME="694"></A><H2>694.  Conservativity </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Conservativity </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8870; Qx( Ax, Bx )  &#8596;  Qx( Ax, Ax &#8743; Bx )  

 </LI> </UL> 
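
Conservativity can be checked empirically over a small universe.  The sketch below uses the illustrative finite-set reading of 'most' (strict majority), which is an assumption, and verifies the equivalence for every pair of subsets:

```python
from itertools import chain, combinations

def most(A, B):
    # illustrative reading: 'most' = strict majority
    return len(A & B) > len(A) / 2

def subsets(U):
    elems = list(U)
    return [set(s) for s in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

U = set(range(5))
# Conservativity: Qx(Ax, Bx) <-> Qx(Ax, Ax & Bx), for all A, B over U.
assert all(most(A, B) == most(A, A & B)
           for A in subsets(U) for B in subsets(U))
```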
<HR><HR><A NAME="695"></A><H2>695.  Monotonicity </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Monotonicity </I> ]</UL>
<H4>Description</H4>                

   <P> Monotonicity concerns what happens to Qx(Ax, Bx) when the set B of things is increased or decreased.  Many determiners are monotone increasing, several are monotone decreasing, and some are neither.  

 </P>  
<HR><HR><A NAME="696"></A><H2>696.  ... increasing </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Monotonicity :: ... increasing </I> ]</UL>
<H4>Description</H4>                

   <P>Q is monotone increasing provided: for all A, B and B', the following argument holds:

<PRE>
   Qx(Ax, Bx), &#8704;x(Bx &#8594; B'x)  &#8870;  Qx(Ax, B'x)

   meaning, if Q(A,B) and you increase B to a larger set B', then Q(A,B').
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
Q cubes are small and in the same row as c.
<B>&#8756;</B> Q cubes are small.
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="697"></A><H2>697.  ... decreasing </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Monotonicity :: ... decreasing </I> ]</UL>
<H4>Description</H4>                

   <P> Q is monotone decreasing provided: for all A, B and B', the following argument holds:

<PRE>
Qx(Ax, B'x), &#8704;x(Bx &#8594; B'x)  &#8870;  Qx(Ax, Bx)

meaning, if Q(A,B') and you decrease B' to a smaller set B, then Q(A,B).
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
Q cubes are small.
<B>&#8756;</B> Q cubes are small and in the same row as c.
</PRE>  

 </LI> </UL> 
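
The increasing and decreasing patterns can be seen on finite sets.  Below, 'some' (monotone increasing) and 'no' (monotone decreasing) are given their usual finite-set readings; the particular sets are invented for illustration.

```python
def some(A, B):   # at least one A is B
    return len(A & B) > 0

def no(A, B):     # no A is B
    return len(A & B) == 0

A  = {1, 2, 3}
B  = {2}          # smaller set
B2 = {2, 4, 6}    # larger set: B is a subset of B2

# 'some' is increasing: some(A, B) together with B <= B2 yields some(A, B2).
assert some(A, B) and B <= B2 and some(A, B2)

# 'no' is decreasing: no(A, B2) would entail no(A, B); contrapositively,
# enlarging the B set can only break 'no', never establish it.
assert no(A, {5}) and not no(A, {5, 2})
```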
<HR><HR><A NAME="698"></A><H2>698.  Persistence </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Persistence </I> ]</UL>
<H4>Description</H4>                

   <P> Similar to monotonicity; persistence concerns what happens when we increase or decrease the set A.  

 </P>  
<HR><HR><A NAME="699"></A><H2>699.  Persistent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Persistence :: Persistent </I> ]</UL>
<H4>Description</H4>                

   <P> Q is persistent provided: for all A, A' and B, the following argument holds:

<PRE>
Qx(Ax, Bx), &#8704;x(Ax &#8594; A'x)  &#8870;  Qx(A'x, Bx)

meaning, if Q(A,B) and you increase A to a larger set A', then Q(A',B).
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
Q small cubes are left of b.
<B>&#8756;</B> Q cubes are left of b.
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="700"></A><H2>700.  Anti-persistent </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: New Quantifiers :: Logic :: Persistence :: Anti-persistent </I> ]</UL>
<H4>Description</H4>                

   <P> Q is anti-persistent provided: for all A, A' and B, the following argument holds:

<PRE>
   Qx(A'x, Bx), &#8704;x(Ax &#8594; A'x)  &#8870;  Qx(Ax, Bx)

   meaning, if Q(A',B) and you decrease A' to a smaller set A, then Q(A,B).
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
Q cubes are left of b.
<B>&#8756;</B> Q small cubes are left of b.
</PRE>  

 </LI> </UL> 
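
Persistence can be checked the same way as monotonicity.  In this sketch (with invented sets, and the usual finite-set readings assumed), 'some' is persistent while 'every' is anti-persistent:

```python
def some(A, B):
    return len(A & B) > 0

def every(A, B):
    return A <= B     # every A is B: A is a subset of B

A  = {1, 2}
A2 = {1, 2, 3}        # larger set: A is a subset of A2
B  = {1, 2}

# 'some' is persistent: some(A, B) with A <= A2 yields some(A2, B).
assert some(A, B) and some(A2, B)

# 'every' is anti-persistent: every(A2, B) would entail every(A, B);
# here the smaller set succeeds while the larger one fails.
assert every(A, B) and not every(A2, B)
```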
<HR><HR><A NAME="701"></A><H2>701.  Syntax </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Functions :: Syntax </I> ]</UL>
<H4>Notation</H4>                

   <P> fx, fxy, where f is a function symbol and x, y, etc. are object symbols.

</P>  <H4>Alternate Notations</H4><UL>            

</UL> <H4>Semantics</H4>                

   <P> Illustrates the internal structure of an object.  A function can be thought of as an expression that evaluates to some object.

</P>  <H4>Examples</H4><UL>            

   <LI> fj or father(john),	father of john, john's father

   </LI> <LI> color(car),		the color of the car

   </LI> <LI> on(book,table),	the book on the table

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> In a proposition, a function is used just like an object.

   </LI> <LI> Functions must operate exactly like mathematical functions.  Each possible value that is passed to a function must result in exactly one referent.  That is, functions must describe one-to-one or many-to-one mappings.  For example, SonOf(x) is not a function, because x may map to multiple people (if x has more than one son) or it may map to nobody (if x doesn't have any sons).

   </LI> <LI> When using function symbols it is important to specify the domain (i.e. the set of individuals to which the function symbol applies).  

 </LI> </UL> 
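
The 'exactly one referent' requirement can be made concrete: over a finite domain, a function symbol denotes a total, single-valued mapping.  The family data below is invented purely for illustration.

```python
# Domain of individuals (hypothetical data).
domain = {"john", "isaac", "abraham"}

# father(x): a legitimate function symbol -- every individual in the
# domain maps to exactly one referent.
father = {"john": "isaac", "isaac": "abraham", "abraham": "terah"}
assert all(x in father for x in domain)   # total: defined everywhere

# SonOf(x) is NOT a function symbol: some individuals map to several
# sons and others to none, so it is not single-valued and total.
sons = {"abraham": ["isaac", "ishmael"], "isaac": ["john"], "john": []}
is_function = all(len(sons.get(x, [])) == 1 for x in domain)
assert not is_function
```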
<HR><HR><A NAME="702"></A><H2>702.  Inference Theory </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Functions :: Inference Theory </I> ]</UL>
<H4>Description</H4>                

   <P> To accommodate function symbols in the predicate calculus with identity, we need make only minor modifications to the formation rules and inference rules.  The new formation rules allow functional expressions to occur anywhere names can occur, and the rules =I, =E, &#8704;E and &#8707;I may now be performed with functional expressions in addition to names.  (However, if we want any of our function symbols to have fixed interpretations, then we must add special rules or axioms for them.)

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
Dictionary:

   Sx,	x is a sinner
   fx,	the father of x

      &#8704;x(Sf(x) &#8594; Sx)  &#8870;  &#8704;x(Sf(f(x)) &#8594; Sx)

1.  &#8704;x(Sf(x) &#8594; Sx)		A
2.  Sf(f(a)) &#8594; Sf(a)		1 &#8704;E (x replaced by f(a))
3.  Sf(a) &#8594; Sa		1 &#8704;E (x replaced by a)
4.  Sf(f(a)) &#8594; Sa		2,3 HS
5.  &#8704;x(Sf(f(x)) &#8594; Sx)	4 &#8704;I
</PRE>

   </LI> <LI> <PRE>
Dictionary:

   fx,	some one place function symbol

      &#8870;  &#8704;x&#8707;y(y=f(x) &#8743; &#8704;z(z=f(x) &#8594; z=y))

1.  f(a)=f(a)			=I
2.  b=f(a) &#8594; b=f(a)			TI (can be proven as a theorem)
3.  &#8704;z(z=f(a) &#8594; z=f(a))		2 &#8704;I
4.  f(a)=f(a) &#8743; &#8704;z(z=f(a) &#8594; z=f(a))	1,3 &#8743;I
5.  &#8707;y(y=f(a) &#8743; &#8704;z(z=f(a) &#8594; z=y))	4 &#8707;I
6.  &#8704;x&#8707;y(y=f(x) &#8743; &#8704;z(z=f(x) &#8594; z=y))	5 &#8704;I
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="703"></A><H2>703.  Formal Definitions </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Formal Definitions </I> ]</UL>
<H4>Description</H4>                

   <P> A formal definition is simply a definition which can be used in a proof.  

 </P>  
<HR><HR><A NAME="704"></A><H2>704.  Syntax </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Formal Definitions :: Syntax </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> a &#8797; b<BR>
   a 'is defined by' b

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The simplest sort of formal definition is direct replacement of one expression by another, generally shorter expression.   More complicated, but also more useful, are <I>contextual definitions</I>, or definitions in use.  In a contextual definition, a symbol is defined not by substituting it for other symbols, but by showing how entire formulas in which it occurs can be systematically translated into formulas in which it does not occur.

   </LI> <LI> Contextual definitions have their limitations.  The definition '(x - y) = z &#8797; (y + z) = x' (given in the Examples below) enables us to introduce '-' (flanked by names, variables, or functional expressions) only on the left side of an identity predicate.  It does not allow us to use '-' in other contexts.  It does not, for example, permit us to write '1=(3-2)', where '-' appears to the right of '='.  For that we need the parallel definition:
<UL>
	z = (x - y) &#8797; (y + z) = x
</UL>
   </LI> <LI> Even with this new definition, there are contexts in which we may not write '-', such as in the expression 's(0 - 0) = s0', where '-' occurs within a functional expression.  Introducing '-' into functional expressions presents special problems, since whenever x is less than y, (x - y) is not a nonnegative integer and hence is not in the domain.  Definitions to handle such cases must therefore include certain restrictions.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> &#8869; &#8797; P &#8743; &#172;P<BR>
   The definition of the contradiction symbol

   </LI> <LI> (x - y) = z &#8797; (y + z) = x<BR>
   A contextual definition used to define '-' in Peano Arithmetic (demonstrated in Mathematical Induction (Varzi)).  

 </LI> </UL> 
<HR><HR><A NAME="705"></A><H2>705.  Inference Theory </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Formal Definitions :: Inference Theory </I> ]</UL>
<H4>Description</H4>                

   <P> In proofs no new inference rule needs to be introduced.  Here's how it works.

<PRE>
Given a definition

	P &#8797; Q

If the proof has a wff that contains an instance of one side of the definition, say 'P', we may infer a new line which consists of the corresponding wff such that the instance of P is replaced by Q.  The justification is simply a citation of the first line and the reason is:  <definition name>.

	...
	n.	P &#8743; S	<justification>
	...
	m.	Q &#8743; S	n <definition name>
	...	
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
Given the definition

	D1:   (x - y) = z &#8797; (y + z) = x

Prove:

	&#8870;  &#8704;x(x - 0) = x

1.   &#8704;x(0 + x) = x	TI (Provable as a theorem in Peano Arithmetic)
2.   &#8704;x(x - 0) = x	1 D1
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="706"></A><H2>706.  Definite Descriptions </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Definite Descriptions </I> ]</UL>
<H4>Description</H4>                

   <P> Definite descriptions are expressions which purportedly denote a single object by enumerating properties which uniquely identify it.

   </P> <P> In English they are usually phrases beginning with the definite article 'the'. 

</P>  <H4>Examples</H4>                

   <P> The current president of the United States

   </P> <P> The summit of Mount Everest

   </P> <P> The steel tower in Paris

   </P> <P> Isaac's father  

 </P>  
<HR><HR><A NAME="707"></A><H2>707.  Syntax </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Definite Descriptions :: Syntax </I> ]</UL>
<H4>Notation</H4>                

   <P> Russell introduced a new symbol, &#953;.  This symbol can be thought of as a new quantifier: 'the', 'there exists exactly one'.  However, such expressions are not propositions.  Grammatically, they are names, just like function expressions.

</P>  <H4>Examples</H4><UL>            

   <LI> using Fxy to mean, 'x is the father of y':
<PRE>
   The father of Arnold
      &#953;xFxa

   Arnold's father was a prophet.
      P&#953;xFxa
</PRE>

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Definite descriptions, like functional expressions, act grammatically like names.

   </LI> <LI> All formalized definite descriptions have the form '&#953;xFx', where 'x' may be replaced by any variable and 'Fx' by any open sentence on that variable.  Thus, the simplest statements containing definite descriptions all have the form 'G&#953;xFx', where G may be replaced by any one-place predicate.  This may be read as "The F is G."  The statements 'The father of Isaac was a prophet' and 'The steel tower in Paris is magnificent' both have this form.  

 </LI> </UL> 
<HR><HR><A NAME="708"></A><H2>708.  Inference Theory </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Definite Descriptions :: Inference Theory </I> ]</UL>
<H4>Description</H4>                

   <P> No new axioms or inference rules need to be added to FOL to support definite descriptions.  We can simply use the Formal Definition of Definite Descriptions.  

 </P>  
<HR><HR><A NAME="709"></A><H2>709.  Formal Definition of </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Definite Descriptions :: Formal Definition of </I> ]</UL>
<H4>Description</H4>                

   <P> Russell's theory of definite descriptions takes as its starting point the observation that definite singular noun phrases (NPs) can occur in two basic contexts:

</P>  <H4></H4><OL>            

   <LI> The P is a Q

   </LI> <LI> The P exists

</LI> </OL> <H4></H4>                

   <P> Sentences of the form (1) are understood as saying:

<PRE>
      There is exactly one P, and it is Q.
      &#8707;x((Px &#8743; &#8704;y(Py &#8594; x=y)) &#8743; Qx)
</PRE>

   </P> <P> While, those of form (2) can be understood as:

<PRE>
      &#8707;x( Px &#8743; &#8704;y(Py &#8594; y=x))
</PRE>

   </P> <P> Thus, Russell's '&#953;' may be treated by formal definition:

<PRE>
   G&#953;xFx &#8797; &#8707;x((Fx &#8743; &#8704;y(Fy &#8594; x=y)) &#8743; Gx)
   Where 'G' is any one-place predicate.
</PRE>  

 </P>  
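
Russell's analysis can be evaluated directly over a finite model: 'The F is G' holds iff exactly one thing is F, and that thing is G.  The model below (domain, predicates) is an invented illustration, not part of the text.

```python
def the_F_is_G(F, G, domain):
    """G(ix Fx) via Russell:  Ex((Fx & Ay(Fy -> x=y)) & Gx)."""
    Fs = [x for x in domain if F(x)]
    return len(Fs) == 1 and G(Fs[0])

domain = ["eiffel", "liberty", "big_ben"]
is_steel_tower_in_paris = lambda x: x == "eiffel"
is_magnificent = lambda x: x in ("eiffel", "liberty")
is_tower = lambda x: x in ("eiffel", "big_ben")

# 'The steel tower in Paris is magnificent': unique F, and it is G.
print(the_F_is_G(is_steel_tower_in_paris, is_magnificent, domain))  # True
# 'The tower is magnificent' fails: 'the tower' does not denote uniquely.
print(the_F_is_G(is_tower, is_magnificent, domain))                 # False
```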
<HR><HR><A NAME="710"></A><H2>710.  Problems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Logics :: Augmented Symbologies :: Definite Descriptions :: Problems </I> ]</UL>
<H4>Description</H4>                

   <P> A number of interesting complexities and ambiguities arise when definite descriptions occur in contexts more complicated than the subject position of a one-place predicate.  For these cases, the definition of definite descriptions must be generalized to allow 'G&#953;xFx' to stand for any wff containing a single occurrence of '&#953;xFx'.  

 </P>  
<HR><HR><A NAME="711"></A><H2>711.  Frege-Russelle Extension Logics </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics </I> ]</UL>
<H4>Description</H4>                

   <P> These are logics which are still classical, in that they have all the properties of a classical logic, but they study concepts other than truth-functional operators and quantifiers.  Typically these logics are used as extensions to some Frege-Russelle logic.  

 </P>  
<HR><HR><A NAME="712"></A><H2>712.  Modal Logics </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics </I> ]</UL>
<H4>Description</H4>                

   <P> Modal logics study pairs of complementary concepts of modality.  The most commonly studied Modal Logics include alethic (which studies necessity), epistemic (knowledge), deontic (obligations), temporal (time) and quantified (first-order logic).

</P>  <H4>Concepts</H4>                

   <P> In the following table, a and b represent the two concepts of the modal logic; c and d represent their respective negated forms.

<PRE>
                a            b            c            d
Alethic         necessary    possible     contingent   impossible
Existential     universal    existing     partial      empty
Epistemic       verified     unfalsified  undecided    falsified
Deontic         obligatory   permitted    indifferent  forbidden
</PRE>
  

 </P>  
<HR><HR><A NAME="713"></A><H2>713.  The Generalized Theory </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: The Generalized Theory </I> ]</UL>
<H4>Description</H4>                

   <P> The modal logics are so similar to each other in syntax and inference, that they are often studied collectively.  

 </P>  
<HR><HR><A NAME="714"></A><H2>714.  Attributes </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: The Generalized Theory :: Attributes </I> ]</UL>
<H4>Description</H4>                

   <P> Modal logics share the following characteristics.

</P>  <H4>Attributes</H4><OL>            

   <LI> A modal logic studies a pair of operators (we'll use &#966; and &#968; for these operators).

   </LI> <LI> <B>Syntax</B><BR>
   The operators work as prefix operators.  If &#920; is a proposition and &#966; is a modal operator, then &#966;&#920; is a modal proposition. 

   </LI> <LI> <B>Semantics</B><BR>
   The operators are semantically related; loosely analogous to the relationship expressed by the pair: 'total' and 'partial'.

   </LI> <LI> the operators are non-truth-functional.  That is:<BR>
   the truth-value of &#966;P cannot be determined only by knowing that of P.<BR>
   the truth-value of &#968;P cannot be determined only by knowing that of P.
   
   </LI> <LI> the operators are logically interdefinable (each is the dual of the other):
<UL>
   &#172;&#966;P  &#8596;  &#968;&#172;P<BR>
   &#172;&#968;P  &#8596;  &#966;&#172;P
</UL>

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> &#8704; and &#8707; are a modal pair.

   </LI> <LI> This section covers the modal operators commonly studied by Logicians, however, many more still can be found.  Following are some interesting examples:

<PRE>
   - Modal Pair:  (AxP, TxP)  (x accepts P, x tolerates P)
   - Modal Pair:  (LP, LP) (P is likely, P is likely)
</PRE>

   </LI> <LI> This second example is interesting, because it's really only one operator.  However, it fulfills all the criteria expressed above.  

 </LI> </UL> 
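
The interdefinability laws &#172;&#966;P &#8596; &#968;&#172;P can be checked for the alethic pair (&#9633;, &#9671;) in a toy possible-worlds model.  The worlds and the valuation of P below are assumptions made for this sketch:

```python
worlds = ["w1", "w2", "w3"]
truth = {"w1": True, "w2": False, "w3": True}  # truth-value of P per world

def box(p):       # 'necessarily p': p holds at every world
    return all(p(w) for w in worlds)

def diamond(p):   # 'possibly p': p holds at some world
    return any(p(w) for w in worlds)

P = lambda w: truth[w]
notP = lambda w: not truth[w]

# Duality:  ~[]P <-> <>~P   and   ~<>P <-> []~P
assert (not box(P)) == diamond(notP)   # both True here: P fails at w2
assert (not diamond(P)) == box(notP)   # both False: P holds somewhere
```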
<HR><HR><A NAME="715"></A><H2>715.  Modal Quantified Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Quantified Logic </I> ]</UL>
<H4>Description</H4>                

   <P> This is simply the quantification theory presented as First-Order Logic.  It's primarily here for completeness.  </P>

  
<HR><HR><A NAME="716"></A><H2>716.  Language </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Quantified Logic :: Language </I> ]</UL>
<H4>Description</H4>                

   <P> See First-Order Logic.  

 </P>  
<HR><HR><A NAME="717"></A><H2>717.  Modal Alethic Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic </I> ]</UL>
<H4>Alternate Names</H4>                

   <P> Modal Logic

   </P> <P> Subjunctive Logic

</P>  <H4>Description</H4>                

   <P> Alethic logic studies arguments whose validity depends on 'necessary', 'possible' and similar notions.  

 </P>  
<HR><HR><A NAME="718"></A><H2>718.  Syntax </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax </I> ]</UL>
<H4>Description</H4>                

   <P> Modal logic extends FOL with the addition of two new symbols.  

 </P>  
<HR><HR><A NAME="719"></A><H2>719.  Necessitation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Lexical Elements :: Necessitation </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> &#9633;&#934;

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> N:&#934;

</LI> </UL> <H4>Description</H4>                

   <P> It's necessarily the case that &#934;.  &#934; is necessary.  &#934; is true in all possible worlds.  

 </P>  
<HR><HR><A NAME="720"></A><H2>720.  Possibility </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Lexical Elements :: Possibility </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> &#9671;&#934;

</LI> </UL> <H4>Alternate Notations</H4><UL>            

   <LI> P:&#934;

</LI> </UL> <H4>Description</H4>                

   <P> It's possible that &#934;.  &#934; is possible.  We can imagine a world in which &#934; is true.  

 </P>  
<HR><HR><A NAME="721"></A><H2>721.  Formation Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Well-Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> If &#934; is a wff, so are &#9633;&#934; and &#9671;&#934;.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> This rule may be added to the set of formation rules for any other logic.  

 </LI> </UL> 
<HR><HR><A NAME="722"></A><H2>722.  "A is impossible" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A is impossible" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;&#9671;A

   </LI> <LI> &#9633;&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="723"></A><H2>723.  "A is consistent with B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A is consistent with B" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9671;(A &#8743; B)<BR>
   it's possible that both A and B.  

 </LI> </UL> 
<HR><HR><A NAME="724"></A><H2>724.  "A entails B", "It's necessary that if A then B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A entails B", "It's necessary that if A then B" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;(A &#8594; B)  

 </LI> </UL> 
<HR><HR><A NAME="725"></A><H2>725.  "A is inconsistent with B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A is inconsistent with B" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;&#9671;(A &#8743; B)<BR>
   It's not possible that A and B are both true.  

 </LI> </UL> 
<HR><HR><A NAME="726"></A><H2>726.  "A doesn't entail B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A doesn't entail B" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;&#9633;(A &#8594; B)<BR>
   It's not necessary that if A then B.  

 </LI> </UL> 
<HR><HR><A NAME="727"></A><H2>727.  "A is a contingent statement" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A is a contingent statement" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9671;A &#8743; &#9671;&#172;A<BR>
   A is possible and not-A is possible.  

 </LI> </UL> 
<HR><HR><A NAME="728"></A><H2>728.  "A is a contingent truth" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A is a contingent truth" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8743; &#9671;&#172;A<BR>
   A is true but could have been false.  

 </LI> </UL> 
<HR><HR><A NAME="729"></A><H2>729.  "If A, then it's necessary that B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "If A, then it's necessary that B" </I> ]</UL>
<H4>Description</H4>                

   <P> This statement is ambiguous.  There are two possible interpretations.

</P>  <H4>Forms</H4><UL>            

   <LI> A &#8594; &#9633;B  (called, the <I>necessity of the consequent</I>)

   </LI> <LI> &#9633;(A &#8594; B)  (called, the <I>necessity of the consequence</I>)

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> (of first form) If you're a bachelor, then you're inherently unmarried.  (in no possible world would anyone ever marry you)

   </LI> <LI> (of second form) It's necessary that if you're a bachelor then you're unmarried.  (This is trivially true)  

 </LI> </UL> 
<HR><HR><A NAME="730"></A><H2>730.  "If A, then it's impossible that B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "If A, then it's impossible that B" </I> ]</UL>
<H4>Description</H4>                

   <P> This statement is ambiguous.  There are two possible interpretations.

</P>  <H4>Forms</H4><UL>            

   <LI> A &#8594; &#9633;&#172;B

   </LI> <LI> &#9633;(A &#8594; &#172;B)  

 </LI> </UL> 
<HR><HR><A NAME="731"></A><H2>731.  "Necessary not ..." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "Necessary not ..." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;&#172;P  

 </LI> </UL> 
<HR><HR><A NAME="732"></A><H2>732.  "not necessary..." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "not necessary..." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;&#9633;P  

 </LI> </UL> 
<HR><HR><A NAME="733"></A><H2>733.  "necessary if..." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "necessary if..." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;(P &#8594; Q)  

 </LI> </UL> 
<HR><HR><A NAME="734"></A><H2>734.  "if necessary..." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "if necessary..." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> (&#9633;P &#8594; Q)  

 </LI> </UL> 
<HR><HR><A NAME="735"></A><H2>735.  "If A then B (by itself) is necessary" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "If A then B (by itself) is necessary" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8594; &#9633;B  

 </LI> </UL> 
<HR><HR><A NAME="736"></A><H2>736.  "Necessarily, if A then B" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "Necessarily, if A then B" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;(A &#8594; B)  

 </LI> </UL> 
<HR><HR><A NAME="737"></A><H2>737.  "If A then B" is a necessary truth. </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "If A then B" is a necessary truth. </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;(A &#8594; B)  

 </LI> </UL> 
<HR><HR><A NAME="738"></A><H2>738.  "A is self-contradictory" </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "A is self-contradictory" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;&#9671;A  

 </LI> </UL> 
<HR><HR><A NAME="739"></A><H2>739.  F is a necessary/essential property of x </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: F is a necessary/essential property of x </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;Fx  

 </LI> </UL> 
<HR><HR><A NAME="740"></A><H2>740.  x has the necessary property of being F </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: x has the necessary property of being F </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;Fx  

 </LI> </UL> 
<HR><HR><A NAME="741"></A><H2>741.  x is necessarily F </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: x is necessarily F </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;Fx  

 </LI> </UL> 
<HR><HR><A NAME="742"></A><H2>742.  In all possible worlds, x would be F </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: In all possible worlds, x would be F </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;Fx  

 </LI> </UL> 
<HR><HR><A NAME="743"></A><H2>743.  F is a contingent/accidental property of x </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: F is a contingent/accidental property of x </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> Fx &#8743; &#9671;&#172;Fx  

 </LI> </UL> 
<HR><HR><A NAME="744"></A><H2>744.  x is F but could have lacked F </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: x is F but could have lacked F </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> Fx &#8743; &#9671;&#172;Fx  

 </LI> </UL> 
<HR><HR><A NAME="745"></A><H2>745.  x is contingently F </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: x is contingently F </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> Fx &#8743; &#9671;&#172;Fx  

 </LI> </UL> 
<HR><HR><A NAME="746"></A><H2>746.  In the actual world x is F; but in some possible world x isn't F </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: In the actual world x is F; but in some possible world x isn't F </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> Fx &#8743; &#9671;&#172;Fx  

 </LI> </UL> 
<HR><HR><A NAME="747"></A><H2>747.  All A are necessarily B </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: All A are necessarily B </I> ]</UL>
<H4>Description</H4>                

   <P> This statement is ambiguous.  There are two possible interpretations.

</P>  <H4>Forms</H4><UL>            

   <LI> &#8704;x(Ax &#8594; &#9633;Bx) (called <I>de re necessity</I>).

   </LI> <LI> &#9633;&#8704;x(Ax &#8594; Bx) (called <I>de dicto necessity</I>).

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> Of the first form:<BR>
   Everyone who in fact is a person has the necessary property of being a person.

   </LI> <LI> Of the second form:<BR>
   It's necessary that all persons are persons

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> On the de dicto reading, the example above is trivially true; the de re reading makes a substantive metaphysical claim.  

 </LI> </UL> 
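<P> The two readings can be checked in a toy two-world model.  The following Python sketch (the model, the individuals, and the worlds are invented for illustration) evaluates both forms with 'A' and 'B' read as 'is a person', as in the examples above.</P>

```python
domain = ["socrates", "fido"]
worlds = ["actual", "w1"]

# Hypothetical model: who counts as a person varies across worlds.
person = {"actual": {"socrates"}, "w1": {"fido"}}
A = B = person  # both predicates read as 'is a person'

# De dicto: box-forall-x(Ax -> Bx) -- trivially true when A and B coincide
de_dicto = all(all((x not in A[w]) or (x in B[w]) for x in domain)
               for w in worlds)

# De re: forall-x(Ax -> box-Bx) -- everyone actually a person is a
# person in every world; false here, since socrates isn't one in w1
de_re = all((x not in A["actual"]) or
            all(x in B[w] for w in worlds)
            for x in domain)

print(de_dicto, de_re)  # True False
```

The model shows the two forms are not equivalent: the de dicto reading holds while the de re reading fails.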
<HR><HR><A NAME="748"></A><H2>748.  The so-and-so </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: The so-and-so </I> ]</UL>


<H4>Description</H4>                

   <P> This statement is ambiguous.  The best way to see the ambiguity is to look at an example.  

<PRE>
   The number I'm thinking of is necessarily odd.

      Which seems to translate to:

   &#9633;On
</PRE>

   </P> <P> However, this has two interpretations.

</P>  <H4></H4><UL>            

   <LI> This is necessary: "I'm thinking of just one number and it's odd."

   </LI> <LI> "I'm thinking of just one number, and it has the necessary property of being odd."

</LI> </UL> <H4></H4>                

   <P> The first form is false, since I might be thinking of something other than a number, or of more than one number.  The second might be true.

   </P> <P> This ambiguity is resolved using definite descriptions.

<PRE>
	&#9633;&#8707;x((Tx &#8743; &#172;&#8707;y(&#172;x=y &#8743; Ty)) &#8743; Ox)
	&#8707;x((Tx &#8743; &#172;&#8707;y(&#172;x=y &#8743; Ty)) &#8743; &#9633;Ox)
</PRE>  

 </P>  
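<P> The difference in scope can be seen semantically.  The following Python sketch (the model and its contents are invented for illustration; 'T' is 'I'm thinking of', 'O' is 'odd') evaluates the two definite-description formalizations in a toy two-world model where they come apart.</P>

```python
domain = [1, 2, 3]
worlds = ["actual", "w1"]

# Hypothetical model: what I'm thinking of varies across worlds;
# oddness, being mathematical, does not.
thinking = {"actual": {3}, "w1": {2}}      # extension of T per world
odd = lambda n: n % 2 == 1                 # extension of O (world-independent)

def unique_thought(w, x):
    # Tx plus the uniqueness clause: x is the one and only thing thought of
    return x in thinking[w] and thinking[w] == {x}

# Wide scope (de dicto): box Exists-x((Tx & unique) & Ox)
de_dicto = all(any(unique_thought(w, x) and odd(x) for x in domain)
               for w in worlds)

# Narrow scope (de re): Exists-x((Tx & unique) & box Ox), T at the actual world
de_re = any(unique_thought("actual", x) and
            all(odd(x) for _ in worlds)    # box Ox; O is world-independent
            for x in domain)

print(de_dicto, de_re)  # False True
```

In this model the wide-scope reading is false (in w1 the number thought of is even) while the narrow-scope reading is true, matching the discussion above.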
<HR><HR><A NAME="749"></A><H2>749.  Semantic Problems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Syntax :: Formalization Hints :: Problems :: Semantic Problems </I> ]</UL>
<H4>Description</H4>                

   <P> The naive system assumes that the same entities exist in all possible worlds.  

 </P>  
<HR><HR><A NAME="750"></A><H2>750.  Interpretation Structures </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Semantics :: Interpretation Structures </I> ]</UL>
<H4>Description</H4>                

   <P> The modal operators are not truth-functional; we cannot always determine the truth value of a sentence of the form '&#9633;P' or '&#9671;P' simply from the truth value of 'P'.

</P>  <H4>Examples</H4>                

   <P> Let 'S' abbreviate 'This stream is polluted'.
   <PRE>
   &#9633;S
   &#9633;&#172;S
</PRE>
   These are both false.  Neither condition is necessary.  The condition of the stream is a contingent fact; it is not bound to be one way or the other.  Thus the operator '&#9633;' may produce a false sentence when prefixed either to a false sentence or to a true sentence.  However,
   <PRE>
   &#9671;S
   &#9671;&#172;S
</PRE>
   These are both true.  Both conditions are possible.  Hence, again we cannot determine the truth value of the modal statement solely from the truth value of its nonmodal component.

</P>  <H4>Notes</H4><UL>            

   <LI> There are two exceptions to the statement that the modal version of 'P' is not determinable from 'P'.  If 'P' is true, then '&#9671;P' is certainly true, since if something is actually the case it is clearly possible.  Also, if 'P' is false, then '&#9633;P' is false, since what is not the case is certainly not necessary.  

 </LI> </UL> 
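<P> The point can be made concrete with a small Python sketch (the worlds and the valuation are invented for illustration): two sentences with the same truth value in the actual world can differ under '&#9633;'.</P>

```python
worlds = ["actual", "w1", "w2"]

# Hypothetical valuation: S ('this stream is polluted') is contingent;
# T is true in every world.
valuation = {
    "S": {"actual": True, "w1": False, "w2": True},
    "T": {"actual": True, "w1": True,  "w2": True},
}

def box(p):      # 'box p' is true iff p is true in all possible worlds
    return all(valuation[p][w] for w in worlds)

def diamond(p):  # 'diamond p' is true iff p is true in some possible world
    return any(valuation[p][w] for w in worlds)

# S and T agree in the actual world, yet their boxed versions differ:
print(valuation["S"]["actual"], valuation["T"]["actual"])  # True True
print(box("S"), box("T"))                                  # False True
```

Since box("S") and box("T") differ while the actual-world values agree, no truth table on the actual-world value alone could define '&#9633;'.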
<HR><HR><A NAME="751"></A><H2>751.  Possible Worlds </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Semantics :: Interpretation Structures :: Possible Worlds </I> ]</UL>
<H4>Description</H4>                

   <P> 'Possible worlds' is a phrase of some controversy.  Here we shall use it to refer to any conceivable state of affairs, any one of the many ways things could have been besides the way they actually are.  

 </P>  
<HR><HR><A NAME="752"></A><H2>752.  &#9671;P </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Semantics :: Interpretation Structures :: &#9671;P </I> ]</UL>
<H4>Description</H4>                

   <P> '&#9671;P' is true if and only if 'P' is true in at least one possible world.  Since the actual world (the universe) is a possible world, if 'P' is true in the actual world, that makes '&#9671;P' true.  But '&#9671;P' may be true even if 'P' is false in the actual world, provided only that 'P' is true in some possible world.  

 </P>  
<HR><HR><A NAME="753"></A><H2>753.  &#9633;P </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Language :: Semantics :: Interpretation Structures :: &#9633;P </I> ]</UL>
<H4>Description</H4>                

   <P> '&#9633;P' is true if and only if 'P' is true in all possible worlds.  Thus, if 'P' is false in the actual world, '&#9633;P' is false.  But if 'P' is true in the actual world, '&#9633;P' may be either true or false, depending upon whether or not 'P' is true in all the other possible worlds as well.  

 </P>  
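<P> Both truth conditions, including the two determinate cases noted earlier, can be verified exhaustively over a small set of worlds.  A Python sketch (the three-world setup is an illustrative assumption):</P>

```python
from itertools import product

worlds = ["actual", "w1", "w2"]

def box(val):     return all(val.values())   # box P: P true in all worlds
def diamond(val): return any(val.values())   # diamond P: P true in some world

# Every way 'P' could be distributed over the three worlds:
for bits in product([False, True], repeat=len(worlds)):
    val = dict(zip(worlds, bits))
    if val["actual"]:          # P true in the actual world => diamond P true
        assert diamond(val)
    else:                      # P false in the actual world => box P false
        assert not box(val)
print("both determinate cases confirmed over all 8 valuations")
```

All eight valuations confirm the two exceptions; every other combination of P with box/diamond values occurs somewhere in the enumeration.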
<HR><HR><A NAME="754"></A><H2>754.  Calculi </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi </I> ]</UL>
<H4>Description</H4>                

   <P> All of the calculus techniques for modal logic are extensions of those for FOL.  

 </P>  
<HR><HR><A NAME="755"></A><H2>755.  S5 </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Varzi :: S5 </I> ]</UL>
<H4>Description</H4>                

   <P> This version of modal logic is based upon the semantics of C. I. Lewis' modal system S5.  

 </P>  
<HR><HR><A NAME="756"></A><H2>756.  Axiom Schemas </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Varzi :: S5 :: Axiom Schemas </I> ]</UL>
<H4></H4><UL>            

   <LI> AS1.  &#9671;P &#8596; &#172;&#9633;&#172;P

   </LI> <LI> AS2.  &#9633;(P &#8594; Q) &#8594; (&#9633;P &#8594; &#9633;Q)

   </LI> <LI> AS3.  &#9633;P &#8594; P

   </LI> <LI> AS4.  &#9671;P &#8594; &#9633;&#9671;P  

 </LI> </UL> 
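<P> Under the possible-worlds semantics given earlier (reading S5 as every world accessible from every world, a standard assumption not spelled out in this outline), the four schemas can be checked by brute force in Python:</P>

```python
from itertools import product

worlds = range(3)  # a small S5 frame: every world sees every world

def box(val):      return all(val)    # 'box' over a universal frame
def diamond(val):  return any(val)    # 'diamond' over a universal frame
def implies(a, b): return (not a) or b

for p in product([False, True], repeat=len(worlds)):
    # AS1: diamond P <-> not box not P
    assert diamond(p) == (not box([not v for v in p]))
    # AS3: box P -> P, at every world
    assert all(implies(box(p), v) for v in p)
    # AS4: diamond P -> box diamond P (box diamond P == diamond P here)
    assert implies(diamond(p), diamond(p))
    for q in product([False, True], repeat=len(worlds)):
        # AS2: box(P -> Q) -> (box P -> box Q)
        pq = [implies(a, b) for a, b in zip(p, q)]
        assert implies(box(pq), implies(box(p), box(q)))
print("AS1-AS4 verified over all valuations")
```

On a universal frame, 'box diamond P' collapses to 'diamond P', which is why AS4 checks trivially; on frames with a restricted accessibility relation it would not.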
<HR><HR><A NAME="757"></A><H2>757.  Necessitation (Inference Rule N) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Varzi :: S5 :: Necessitation (Inference Rule N) </I> ]</UL>
<H4>Description</H4>                

   <P> If P has been proved as a theorem, then we may infer &#9633;P.  

 </P>  
<HR><HR><A NAME="758"></A><H2>758.  &#8870;  P &#8594; &#9671;P </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Varzi :: S5 :: theorems :: &#8870;  P &#8594; &#9671;P </I> ]</UL>
<PRE>
1.   &#9633;&#172;P &#8594; &#172;P	AS3
2.   &#172;&#172;P &#8594; &#172;&#9633;&#172;P	1 TRANS
3.   P &#8594; &#172;&#9633;&#172;P	2 DN
4.   &#9671;P &#8596; &#172;&#9633;&#172;P	AS1
5.   &#172;&#9633;&#172;P &#8594; &#9671;P	4 &#8596;E
6.   P &#8594; &#9671;P	3,5 HS
</PRE>  
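<P> The same theorem can be confirmed semantically.  A brute-force Python sketch (separate from the axiomatic proof above; the three-world frame is an illustrative assumption) checks that P &#8594; &#9671;P holds at every world of every valuation:</P>

```python
from itertools import product

worlds = range(3)

def theorem_holds(val):
    # Check 'P -> diamond P' at each world of the valuation
    diamond_p = any(val)
    return all((not p) or diamond_p for p in val)

assert all(theorem_holds(val)
           for val in product([False, True], repeat=len(worlds)))
print("P -> diamond P holds in all", 2 ** len(worlds), "valuations")
```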

 
<HR><HR><A NAME="759"></A><H2>759.  Gensler </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler </I> ]</UL>
<H4>Description</H4>                

   <P> Gensler's technique is to use a world notation.  A line of a proof preceded by Wn: (where n is some number) represents a particular (alternate) world.  Proof lines not preceded by Wn: represent the actual world (our world), which we may also refer to as 'W0' (world zero).  It is not valid to infer things from premises in two different worlds.  E.g., if our world has 'P &#8594; Q' and W3 has 'P', we cannot infer 'Q'.  The new inference rules tell us when we can introduce or eliminate new worlds.  

 </P>  
<HR><HR><A NAME="760"></A><H2>760.  Reverse Squiggle (RS) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: Reverse Squiggle (RS) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;&#9633;&#934;  &#8870;  &#9671;&#172;&#934;

   </LI> <LI> &#172;&#9671;&#934;  &#8870;  &#9633;&#172;&#934;  

 </LI> </UL> 
<HR><HR><A NAME="761"></A><H2>761.  &#9671; Elimination (&#9671;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: &#9671; Elimination (&#9671;E) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9671;&#934;   &#8870;   [Wn:]  &#934;  

 </LI> </UL> 
<HR><HR><A NAME="762"></A><H2>762.  &#9633; Elimination (&#9633;E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: &#9633; Elimination (&#9633;E) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#9633;&#934;   &#8870;   Wn:  &#934;<BR>
   That is, since &#9633;&#934; means that &#934; is true in all worlds, &#934; may be brought into ANY world including the actual world (W0:).  

 </LI> </UL> 
<HR><HR><A NAME="763"></A><H2>763.  Systems </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Systems </I> ]</UL>
The various systems are focused on the meaning of '&#9633;' and the way that we can use 'drop box'.  Gensler's basic system (S5) says that for any proposition &#9633;P, P is true in all worlds (it cannot be false).  Thus, if we have &#9633;P in this or any other world, we may apply the rule 'drop box' and move P into any world.  Other systems only allow us to go from &#9633;P to P for suitably related worlds.  For Gensler, this means we need a 'travel ticket'.

A travel ticket is dispensed by 'drop diamond'.  When we drop a diamond we move from one world to another, which we may notate as 'W1 => W2'.  Each system dictates how we may use travel tickets: all the valid ways that we can use 'drop box'.  
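<P> On the usual Kripke reading (an assumption here; Gensler states the systems in terms of tickets rather than accessibility relations), the ticket rules correspond to closure properties of an accessibility relation between worlds: T adds reflexivity, B adds symmetry, S4 adds transitivity, and S5 relates every world to every world.  A small Python sketch:</P>

```python
# Closure properties of an accessibility relation R over worlds W.
def reflexive(R, W):  return all((w, w) in R for w in W)
def symmetric(R, W):  return all((b, a) in R for (a, b) in R)
def transitive(R, W): return all((a, d) in R
                                 for (a, b) in R for (c, d) in R if b == c)

W = {0, 1, 2}
tickets = {(0, 1), (1, 2)}             # two 'drop diamond' trips

# S4 reading: add the reflexive loops and the chained trip (0 -> 2)
S4_closure = tickets | {(w, w) for w in W} | {(0, 2)}

print(reflexive(S4_closure, W), transitive(S4_closure, W))  # True True
print(symmetric(S4_closure, W))                             # False
```

The closure is reflexive and transitive but not symmetric, which is exactly why a one-way S4 ticket cannot be used in reverse; adding symmetry (the B ticket) or relating all worlds (S5) changes which 'drop box' moves are legal.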

 
<HR><HR><A NAME="764"></A><H2>764.  T (One Way Ticket) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Systems :: T (One Way Ticket) </I> ]</UL>
<H4>Description</H4>                

   <P> Given a travel ticket 'W1 => W2', we can use 'drop box' to go one way: from &#9633;&#934; in W1 to &#934; in W2, and nothing more.  We can also drop box into the current world: &#9633;&#934; in W1 to &#934; in W1.  

 </P>  
<HR><HR><A NAME="765"></A><H2>765.  B (Either Way Ticket) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Systems :: B (Either Way Ticket) </I> ]</UL>
<H4>Description</H4>                

   <P> Given a travel ticket 'W1 => W2', we can use 'drop box' to go in either direction: from &#9633;&#934; in W1 to &#934; in W2, or from &#9633;&#934; in W2 to &#934; in W1.  We can also drop box into the current world: &#9633;&#934; in W1 to &#934; in W1.  

 </P>  
<HR><HR><A NAME="766"></A><H2>766.  S4 (Series of Tickets) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Systems :: S4 (Series of Tickets) </I> ]</UL>
<H4>Description</H4>                

   <P> Given a travel ticket 'W1 => W2', or a sequence of travel tickets leading from W1 to W2, we can use 'drop box' to go in one direction: from &#9633;&#934; in W1 to &#934; in W2.  We can also drop box into the current world: &#9633;&#934; in W1 to &#934; in W1.  

 </P>  
<HR><HR><A NAME="767"></A><H2>767.  S5 (Ticket to any world) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Systems :: S5 (Ticket to any world) </I> ]</UL>
<H4>Description</H4>                

   <P> We can drop box into any world, including this world.  

 </P>  
<HR><HR><A NAME="768"></A><H2>768.  Construction </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Proofs of Nonconsequence :: Construction </I> ]</UL>
<H4>Description</H4>                

   <P> Construct a proof of refutation as instructed in the section on proofs of refutation in FOL.  If necessary, break each &#9671;A proposition into a possible world Wn: (See:  Gensler's inference rules for FOL).

   </P> <P> List each atomic proposition for each possible world (including this world).  

 </P>  
<HR><HR><A NAME="769"></A><H2>769.  Interpretation </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Proofs of Nonconsequence :: Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> For each premise or conclusion of the form:

<PRE>
&#9671;A

	is true if and only if A is true in at least one world (including this one).

&#9633;A

	is true if and only if A is true in every world (including this one).
</PRE>  

 </P>  
<HR><HR><A NAME="770"></A><H2>770.  &#9671;A  &#8870;  &#9633;A </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Proofs of Nonconsequence :: Examples (Gensler S5 & Fitch) :: &#9671;A  &#8870;  &#9633;A </I> ]</UL>
<PRE>
01.   &#9671;A		A
02.   |   &#172;&#9633;A	H (for &#172;I)
03.   |   &#9671;&#172;A	2 RS
04.   | W1:  A	1 DD
05.   | W2:  &#172;A	3 DD
</PRE>

<PRE>
	A
#	?
W1	t
W2	f

&#9671;A, true in W1, so true.
&#9633;A, not true in all worlds, so false.
</PRE>

<H4>Notes</H4><UL>            

   <LI> # represents the actual world.  

 </LI> </UL> 
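<P> The countermodel above can also be found mechanically.  A brute-force Python sketch (not Gensler's procedure) searches assignments of 'A' over small sets of worlds for one where &#9671;A is true but &#9633;A is false:</P>

```python
from itertools import product

# Search for a countermodel: diamond A true, box A false.
for n in (1, 2):
    for val in product([False, True], repeat=n):
        if any(val) and not all(val):     # diamond A holds, box A fails
            print(f"countermodel with {n} worlds: A = {val}")
            break
    else:
        continue
    break
# No single-world countermodel exists; with two worlds, A true in one
# world and false in the other does the job, as in the table above.
```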
<HR><HR><A NAME="771"></A><H2>771.  A  &#8870;  &#9633;A </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Proofs of Nonconsequence :: Examples (Gensler S5 & Fitch) :: A  &#8870;  &#9633;A </I> ]</UL>
<PRE>
01.   A		A
02.   |   &#172;&#9633;A	H (for &#172;I)
03.   |   &#9671;&#172;A	2 RS
04.   | W1:   &#172;A	3 DD
</PRE>

<PRE>
	A
#	t
W1	f

A, true in the actual world, so true
&#9633;A, not true in all worlds (false in the actual world), so false
</PRE>  

 
<HR><HR><A NAME="772"></A><H2>772.  &#9671;A, &#9671;B  &#8870;  &#9633;&#172;A </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Alethic Logic :: Inference Theory :: Calculi :: Gensler :: Proofs of Nonconsequence :: Examples (Gensler S5 & Fitch) :: &#9671;A, &#9671;B  &#8870;  &#9633;&#172;A </I> ]</UL>
<PRE>
01.   &#9671;A		A
02.   &#9671;B		A
03.   |   &#172;&#9633;&#172;A	H (for &#172;I)
04.   |   &#9671;&#172;&#172;A	3 RS
05.   |   W1:   A	1 DD
06.   |   W2:   B	2 DD
07.   |   W3:   &#172;&#172;A	4 DD
08.   |   W3:   A	7 &#172;E
</PRE>

<PRE>
	A	B
#	?	?
W1	t	?
W2	?	t
W3	t	?

&#9671;A, true (W1 and W3)
&#9671;B, true (W2)
&#9633;&#172;A, false (W1 and W3)
</PRE>  

 
<HR><HR><A NAME="773"></A><H2>773.  Modal Deontic Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic </I> ]</UL>
<H4>Description</H4>                

   <P> Deontic logics study the concepts relating to things that are obligatory (ought to be done) and things that are permissible.  

 </P>  
<HR><HR><A NAME="774"></A><H2>774.  Obligatory </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Lexical Elements :: Obligatory </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> O&#934;

</LI> </UL> <H4>Description</H4>                

   <P> It ought to be that &#934;.  It's obligatory that &#934;.  

 </P>  
<HR><HR><A NAME="775"></A><H2>775.  Permissible </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Lexical Elements :: Permissible </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> R&#934;

</LI> </UL> <H4>Description</H4>                

   <P> It's permissible that &#934;.  It's all right that &#934;.  

 </P>  
<HR><HR><A NAME="776"></A><H2>776.  Formation Rules </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Well-Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> If &#934; is a wff, so are O&#934; and R&#934;.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> This rule may be added to the set of formation rules for any other logic.  

 </LI> </UL> 
<HR><HR><A NAME="777"></A><H2>777.  "It's obligatory that A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It's obligatory that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> OA  

 </LI> </UL> 
<HR><HR><A NAME="778"></A><H2>778.  "It's permissible that A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It's permissible that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> RA  

 </LI> </UL> 
<HR><HR><A NAME="779"></A><H2>779.  "It's not permissible that A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It's not permissible that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;RA

   </LI> <LI> O&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="780"></A><H2>780.  "It ought not be the case that A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It ought not be the case that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="781"></A><H2>781.  "It ought to be that A and B." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It ought to be that A and B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O(A &#8743; B)  

 </LI> </UL> 
<HR><HR><A NAME="782"></A><H2>782.  "It's all right that A or B." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It's all right that A or B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R(A &#8744; B)  

 </LI> </UL> 
<HR><HR><A NAME="783"></A><H2>783.  "It ought not be the case that both A and B." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: Propositional :: "It ought not be the case that both A and B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;(A &#8743; B)  

 </LI> </UL> 
<HR><HR><A NAME="784"></A><H2>784.  "It isn't obligatory that everyone/everything be A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: "It isn't obligatory that everyone/everything be A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;O&#8704;xAx  

 </LI> </UL> 
<HR><HR><A NAME="785"></A><H2>785.  "it's obligatory that not everyone/everything be A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: "it's obligatory that not everyone/everything be A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;&#8704;xAx  

 </LI> </UL> 
<HR><HR><A NAME="786"></A><H2>786.  "It's obligatory that everyone/everything not be A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: "It's obligatory that everyone/everything not be A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8704;x&#172;Ax  

 </LI> </UL> 
<HR><HR><A NAME="787"></A><H2>787.  "It's obligatory that someone/something be A." </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Syntax :: Formalization Hints :: First-Order :: "It's obligatory that someone/something be A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8707;xAx  

 </LI> </UL> 
<HR><HR><A NAME="788"></A><H2>788.  Semantics </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Language :: Semantics </I> ]</UL>
'Ought' is meant in the sense of 'all things considered, it ought to be that P.'

There are at least two uses which differ from this:

-  'I ought to take you to the movies, since I promised.'
   (This could be overridden by 'I ought to take my wife to the hospital.')

-  'I ought to wear a tie to work, since it's mandatory.'  

 
<HR><HR><A NAME="789"></A><H2>789.  Reverse Squiggle (RS) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: Reverse Squiggle (RS) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;O&#934;  &#8870;  R&#172;&#934;

   </LI> <LI> &#172;R&#934;  &#8870;  O&#172;&#934;  

 </LI> </UL> 
<HR><HR><A NAME="790"></A><H2>790.  O Elimination (.O.E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: O Elimination (.O.E) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#934;   &#8870;   Dn:  &#934;<BR>
   That is, since O&#934; means that &#934; is true in all deontic worlds, &#934; may be brought into ANY deontic world including the actual world (D0:).  

 </LI> </UL> 
<HR><HR><A NAME="791"></A><H2>791.  R Elimination (.R.E) </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Deontic Logic :: Inference Theory :: Calculi :: Gensler :: Inference Rules :: R Elimination (.R.E) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R&#934;  &#8870;   [Dn:]  &#934;  

 </LI> </UL> 
<HR><HR><A NAME="792"></A><H2>792.  Modal Epistemic Logic </H2><UL>[ <I> Inference :: Deduction :: Frege-Russelle Extension Logics :: Modal Logics :: Modal Epistemic Logic </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Doxastic Logic

   </LI> <LI> Belief Logic

</LI> </UL> <H4>Description</H4>                

   <P> Epistemic logics study the concepts of 'x knows that P' and 'x believes that P'.  

 </P>  
<HR><HR><A NAME="793"></A><H2>793.  Applications </H2><UL>[ <I> Inference :: Deduction :: Applications </I> ]</UL>
<H4>Description</H4>                

   <P> It is <I>not</I> the business of logic whether or not the propositions of an argument are true.  At the end of the day, logic can do just one thing: it can tell you if a set of propositions is consistent.  If they are consistent, then the very best we can say is that we have a "possibly correct" theory.  

 </P>  
<HR><HR><A NAME="794"></A><H2>794.  Theory </H2><UL>[ <I> Inference :: Deduction :: Applications :: Theory </I> ]</UL>
<H4>Description</H4>                

   <P> A <I><B>theory</B></I> is a set of propositions (called axioms or postulates) and all the logical consequences of those propositions.  Logic's role is to tell us if the axioms are consistent.
  

 </P>  
<HR><HR><A NAME="795"></A><H2>795.  Axioms </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axioms </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Postulate

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> A fundamental proposition of a theory taken as a tautology

</LI> </OL> <H4>Related Terms</H4><UL>            

   <LI> Meaning Postulate - An axiom that describes the meaning of a predicate by relating it to other predicates.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> An axiom is usually in the form of a definition or statement of the properties of some predicate by relating it to other predicates.  

   </LI> <LI> Axioms are different from other kinds of tautologies in that axioms can only be said to be true because they are accepted as such.  

 </LI> </UL> 
<HR><HR><A NAME="796"></A><H2>796.  Left/Right Axioms </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axioms :: Examples :: Left/Right Axioms </I> ]</UL>
<H4>Description</H4>                

   <P> This is a simple system which describes the spatial relations 'left' and 'right'.

</P>  <H4>Dictionary</H4><UL>            

   <LI> LeftOf(x,y)

   </LI> <LI> RightOf(x,y)

</LI> </UL> <H4>Axioms</H4><UL>            

   <LI> &#8704;x&#8704;y(LeftOf(x,y) &#8596; RightOf(y,x))<BR>
      x is left of y iff y is right of x.

   </LI> <LI> &#172;&#8707;x LeftOf(x,x)<BR>
      Nothing can be left of itself.

   </LI> <LI> &#172;&#8707;x RightOf(x,x) <BR>
      Nothing can be right of itself.  

 </LI> </UL> 
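<P> The consistency of these axioms can be witnessed by a finite model.  A Python sketch (the model of integer points on a line is an invented illustration):</P>

```python
# A small finite model of the Left/Right axioms: integer points on a line,
# with LeftOf as '<' and RightOf as '>'.
points = [0, 1, 2]
LeftOf  = lambda x, y: x < y
RightOf = lambda x, y: x > y

# Axiom 1: x is left of y iff y is right of x
assert all(LeftOf(x, y) == RightOf(y, x) for x in points for y in points)
# Axiom 2: nothing is left of itself
assert not any(LeftOf(x, x) for x in points)
# Axiom 3: nothing is right of itself
assert not any(RightOf(x, x) for x in points)
print("all three axioms hold in the model")
```

Since the axioms all hold in this model, the set is consistent, which per the preceding sections is exactly what logic can certify about a theory.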
<HR><HR><A NAME="797"></A><H2>797.  Height Axioms </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axioms :: Examples :: Height Axioms </I> ]</UL>
<H4>Description</H4>                

   <P> Here's a simple axiomatic system which describes height relations among people.

</P>  <H4>Dictionary</H4><UL>            

   <LI> Txy, x is taller than y

</LI> </UL> <H4>Axioms</H4><UL>            

   <LI> T1: &#8704;x&#8704;y&#8704;z((Txy &#8743; Tyz) &#8594; Txz)

   </LI> <LI> T2: &#8704;x&#8704;y(Txy &#8594; &#172;Tyx)  

 </LI> </UL> 
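<P> A sample model again witnesses consistency.  In this Python sketch (names and heights are invented for illustration), T1 is read as transitivity and T2 as asymmetry of 'taller than':</P>

```python
# A finite model of the height axioms: T interpreted as 'strictly taller'.
height = {"al": 180, "bo": 170, "cy": 160}
T = lambda x, y: height[x] > height[y]
people = list(height)

# T1 (transitivity): Txy and Tyz imply Txz
assert all((not (T(x, y) and T(y, z))) or T(x, z)
           for x in people for y in people for z in people)
# T2 (asymmetry): Txy implies not Tyx
assert all((not T(x, y)) or (not T(y, x))
           for x in people for y in people)
print("T1 and T2 hold in the model")
```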
<HR><HR><A NAME="798"></A><H2>798.  Euclid's Postulates </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axioms :: Examples :: Euclid's Postulates </I> ]</UL>
<H4>Description</H4>                

   <P> Many axiomatic systems are not formalized.  Probably the most famous is Euclid's geometry.

</P>  <H4>Axioms</H4><OL>            

   <LI> A straight line segment can be drawn joining any two points.

   </LI> <LI> Any straight line segment can be extended indefinitely in a straight line.

   </LI> <LI> Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as center.

   </LI> <LI> All right angles are congruent.

   </LI> <LI> If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended far enough.  This postulate is equivalent to what is known as the parallel postulate.  

 </LI> </OL> 
<HR><HR><A NAME="799"></A><H2>799.  Proof Steps </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axioms :: Use in Proofs :: Proof Steps </I> ]</UL>
<H4>Description</H4>                

   <P> An axiom may be introduced at any step of a formal two-column proof.  The justification is usually just the name of the axiom.  Since an axiom is a tautology (at least within the theory), no previous steps are cited, just as when introducing a theorem.  

 </P>  
<HR><HR><A NAME="800"></A><H2>800.  Theorems </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axioms :: Use in Proofs :: Theorems </I> ]</UL>
<H4>Description</H4>                

   <P> In an axiomatic system, a theorem is a conclusion derived (via proof) from and only from one or more axioms of the system.  

 </P>  
<HR><HR><A NAME="801"></A><H2>801.  Axiom Schemas </H2><UL>[ <I> Inference :: Deduction :: Applications :: Axiomitization of Theories :: Axiom Schemas </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Sometimes it's not possible to list all the axioms of a system because the number is large or infinite.  However, when this is the case, it is also often true that many of the axioms follow some 'pattern'.  If that pattern can be stated formally such that each substitution instance of the pattern is in fact one of the axioms, then the pattern is called an axiom schema.

   </P> <P> In proofs, an axiom schema is used just like an axiom.  

 </P>  
<HR><HR><A NAME="802"></A><H2>802.  Formal Arithmetic </H2><UL>[ <I> Inference :: Deduction :: Applications :: Formal Arithmetic </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Peano Arithmetic

</LI> </UL> <H4>Description</H4>                

   <P> A formal axiomatic theory of the arithmetic of nonnegative integers.  The resulting formal system generates proofs for many arithmetical tautologies and enables us to formalize and evaluate a great deal of arithmetical reasoning.

   </P> <P> Arithmetic can be regarded as an extension of the predicate calculus with identity.  It is obtained by adding one name and three function symbols to the vocabulary.
  

 </P>  
<HR><HR><A NAME="803"></A><H2>803.  Dictionary </H2><UL>[ <I> Inference :: Deduction :: Applications :: Formal Arithmetic :: Dictionary </I> ]</UL>
<H4>Object Symbols</H4><UL>            

   <LI> 0,  the name (object symbol) used to designate the number zero.

</LI> </UL> <H4>Function Symbols</H4><UL>            

   <LI> x+y,  the arithmetic operation of addition.

   </LI> <LI> x*y,  the arithmetic operation of multiplication.

   </LI> <LI> sx,  the successor of x.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Natural numbers other than '0' are treated as abbreviations for functional expressions built up from 's' and '0'.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> 1&#8797;'s0'

   </LI> <LI> 2&#8797;'ss0'

   </LI> <LI> 3&#8797;'sss0'

   </LI> <LI> 4&#8797;'ssss0'  

 </LI> </UL> 
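   <P> The numeral abbreviations above are easy to mechanize.  A hedged Python sketch (the encoding of terms as strings like 'ss0' is an illustration, not part of the formal system): </P>

```python
# Numerals as iterated successors: n is written as 's' repeated n times,
# applied to the name '0'.
def numeral(n: int) -> str:
    """Return the formal numeral denoting the nonnegative integer n."""
    return "s" * n + "0"

def value(term: str) -> int:
    """Recover the integer a numeral denotes by counting its 's' prefix."""
    assert term.endswith("0") and set(term) <= {"s", "0"}
    return len(term) - 1

print(numeral(4))    # prints ssss0
print(value("ss0"))  # prints 2
```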
<HR><HR><A NAME="804"></A><H2>804.  Axioms </H2><UL>[ <I> Inference :: Deduction :: Applications :: Formal Arithmetic :: Axioms </I> ]</UL>
<H4></H4><OL>            

   <LI> &#8704;x&#172;0=sx

   </LI> <LI> &#8704;x&#8704;y(sx=sy &#8594; x=y)

   </LI> <LI> &#8704;x(x+0)=x

   </LI> <LI> &#8704;x&#8704;y(x+sy)=s(x+y)

   </LI> <LI> &#8704;x(x*0)=0

   </LI> <LI> &#8704;x&#8704;y(x*sy)=((x*y)+x)  

 </LI> </OL> 
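   <P> Read computationally, Axioms 3-6 are recursion equations: they define addition and multiplication on successor numerals.  A hedged Python sketch of that reading (the string encoding of terms is illustrative, not part of the formal system): </P>

```python
# Axioms 3-6 read as recursion equations on numerals ('s' * n + '0').
def add(x: str, y: str) -> str:
    if y == "0":                      # Axiom 3: (x + 0) = x
        return x
    return "s" + add(x, y[1:])        # Axiom 4: (x + sy) = s(x + y)

def mul(x: str, y: str) -> str:
    if y == "0":                      # Axiom 5: (x * 0) = 0
        return "0"
    return add(mul(x, y[1:]), x)      # Axiom 6: (x * sy) = ((x * y) + x)

print(add("ss0", "ss0"))  # prints ssss0  (2 + 2 = 4)
print(mul("s0", "ss0"))   # prints ss0    (1 * 2 = 2)
```

   <P> Axioms 1 and 2 play a different role: they are not computation rules, but guarantee that distinct numerals denote distinct numbers. </P>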
<HR><HR><A NAME="805"></A><H2>805.  &#8870;  2 + 2 = 4 </H2><UL>[ <I> Inference :: Deduction :: Applications :: Formal Arithmetic :: Theorems :: &#8870;  2 + 2 = 4 </I> ]</UL>
<H4>Sequent</H4>                

   <P> &#8870;  2 + 2 = 4<BR>
   &#8870;  ss0 + ss0 = ssss0

</P>  <H4>Proof</H4>                

<PRE>
1.  &#8704;x&#8704;y(x + sy) = s(x + y)	Axiom 4
2.  &#8704;y(ss0 + sy) = s(ss0 + y)	1 &#8704;E (x replaced by ss0)
3.  (ss0 + ss0) = s(ss0 + s0)	2 &#8704;E (y replaced by s0)
4.  (ss0 + s0) = s(ss0 + 0)	2 &#8704;E (y replaced by 0)
5.  (ss0 + ss0) = ss(ss0 + 0)	3,4 =E
6.  &#8704;x(x + 0) = x		Axiom 3
7.  (ss0 + 0) = ss0		6 &#8704;E
8.  (ss0 + ss0) = ssss0	5,7 =E
</PRE>  
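   <P> The equational core of this proof can be mechanized: rewrite repeatedly with Axiom 4 until Axiom 3 applies.  A hedged Python trace of that rewriting (an illustration only; it replays the equalities of lines 3, 5 and 8, not the formal &#8704;E bookkeeping): </P>

```python
def add_trace(x: str, y: str) -> list:
    """List the terms (x + y) passes through under Axioms 4 and 3."""
    prefix = ""
    trace = [f"({x} + {y})"]
    while y != "0":                   # Axiom 4: (x + sy) = s(x + y)
        prefix += "s"
        y = y[1:]
        trace.append(f"{prefix}({x} + {y})")
    trace.append(prefix + x)          # Axiom 3: (x + 0) = x
    return trace

for term in add_trace("ss0", "ss0"):
    print(term)
```

   <P> For '2 + 2' this prints the chain (ss0 + ss0), s(ss0 + s0), ss(ss0 + 0), ssss0, mirroring the equalities established at lines 3, 5 and 8 of the proof. </P>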

  
<HR><HR><A NAME="806"></A><H2>806.  &#8870;  1 * 2 = 2 </H2><UL>[ <I> Inference :: Deduction :: Applications :: Formal Arithmetic :: Theorems :: &#8870;  1 * 2 = 2 </I> ]</UL>
<H4>Sequent</H4>                

   <P> &#8870;  1 * 2 = 2<BR>
   &#8870;  s0 * ss0 = ss0

</P>  <H4>Proof</H4>                

<PRE>
1.  &#8704;x&#8704;y(x*sy) = ((x*y) + x)		Axiom 6
2.  &#8704;y(s0*sy) = ((s0 * y) + s0)		1 &#8704;E (x &#8594; s0)
3.  (s0 * ss0) = ((s0 * s0) + s0)		2 &#8704;E (y &#8594; s0)
4.  &#8704;x&#8704;y(x + sy) = s(x + y)		Axiom 4
5.  &#8704;y((s0 * s0) + sy) = s((s0 * s0) + y)	4 &#8704;E (x &#8594; (s0 * s0))
6.  ((s0*s0) + s0) = s((s0*s0) + 0)	5 &#8704;E (y &#8594; 0)
7.  (s0 * ss0) = s((s0 * s0) + 0)		3,6 =E
8.  &#8704;x(x+0) = x			Axiom 3
9.  ((s0*s0) + 0) = (s0 * s0)		8 &#8704;E (x &#8594; (s0 * s0))
10.  (s0 * ss0) = s(s0 * s0)		7,9 =E
11.  (s0 * s0) = ((s0*0) + s0)		2 &#8704;E (y &#8594; 0)
12.  (s0*ss0) = s((s0*0) + s0)		10,11 =E
13.  &#8704;x(x*0)=0			Axiom 5
14.  (s0*0) = 0			13 &#8704;E (x &#8594; s0)
15.  (s0*ss0) = s(0 + s0)		12,14 =E
16.  &#8704;y(0 + sy) = s(0 + y)		4 &#8704;E (x &#8594; 0)
17.  (0 + s0) = s(0 + 0)		16 &#8704;E (y &#8594; 0)
18.  (s0 * ss0) = ss(0 + 0)		15,17 =E
19.  (0 + 0) = 0			8 &#8704;E (x &#8594; 0)
20.  (s0 * ss0) = ss0			18,19 =E
</PRE>  

  
<HR><HR><A NAME="807"></A><H2>807.  Comments </H2><UL>[ <I> Inference :: Deduction :: Applications :: Formal Arithmetic :: Comments </I> ]</UL>
<H4></H4>                

   <P> Notice that much of the work in these two theorems involved &#8704;E.  In math, initial universal quantifiers are generally omitted, so that Axiom 5, for example, would be written simply as '(x * 0) = 0'.  Moreover, proofs in mathematics often skip many of the steps (such as &#8704;E) which are required for complete formal rigor.  Given the demanding complexity of the formal proofs exemplified in these theorems, such shortcuts are obviously both desirable and necessary.  

 </P>  
<HR><HR><A NAME="808"></A><H2>808.  Mathematical Induction </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction </I> ]</UL>
<H4>Description</H4>                

   <P> The name is a bit misleading; mathematical induction is actually a form of deductive reasoning that hinges on a special kind of definition.  Mathematical Induction is a generalization of the rule &#8594;E.  

 </P>  
<HR><HR><A NAME="809"></A><H2>809.  Overview </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Overview </I> ]</UL>
<H4>Description</H4>                

   <P> The easiest way to describe mathematical induction is by demonstration.  The idea is this:  We wish to prove that a certain general fact holds for all the nonnegative integers, i.e., that they all have a certain (possibly very complex) property.  To do this, it suffices to show two things.

<PRE>
   (1) Zero has this property.
   (2) For any number x, if x has this property, then so does the successor of x.
</PRE>

   </P> <P> This second proposition is equivalent to the following sequence of conditional statements:

<PRE>
   If 0 has this property, then so does 1.
   If 1 has this property, then so does 2.
   If 2 has this property, then so does 3.
   etc.
</PRE>

   </P> <P> Now condition (1) together with the first of these conditionals implies by &#8594;E that 1 has the property in question.  And the fact that 1 has this property, together with the second conditional, implies by &#8594;E that 2 has it, and so on.  Thus, by what are in effect infinitely many steps of &#8594;E, we reach the conclusion that all the nonnegative integers have this property, i.e., that the desired generalization is true of them all.

   </P> <P> We can't actually write out a proof containing infinitely many steps of &#8594;E.  Yet the reasoning outlined above is clearly valid.  So we adopt a new rule of inference which licenses this reasoning.  Intuitively, the rule says that if we have established conditions (1) and (2), we can directly infer:

<PRE>
   (3) Every nonnegative integer has this property.
</PRE>  

 </P>  
<HR><HR><A NAME="810"></A><H2>810.  Definition </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Definition </I> ]</UL>
<H4>Mathematical Induction</H4>                

   <P> Given a wff P containing the name letter '0' and a wff of the form &#8704;a(P(a/0) &#8594; P(sa/0)), we may infer &#8704;aP(a/0), where P(a/0) is the result of replacing one or more occurrences of '0' in P by some variable a not already in P, and P(sa/0) is the result of replacing those same occurrences of '0' in P by sa.  

 </P>  
<HR><HR><A NAME="811"></A><H2>811.  Mathematical Induction (MI) </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Inference Rule :: Mathematical Induction (MI) </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> Pb, &#8704;x(P(x/b) &#8594; P(sx/b))  &#8870;  &#8704;xP(x/b)<BR>
   </LI> <LI> Pb, &#8704;x(Px &#8594; Psx)  &#8870;  &#8704;xPx<BR>

   where,<BR>
   Px, (predicate) the property being proven<BR>
   b, the base case<BR>
   sx, the successor function from Peano Arithmetic

</LI> </UL> <H4>Description</H4>                

   <P> The two premises correlate to (1) and (2) from the "Overview" respectively.  The conclusion correlates to (3).

   </P> <P> First, show that the base element of the set has property P.  Next, show that the successor of any element with property P also has P.  This entitles us to conclude that every element of the set has P.  

 </P>  
<HR><HR><A NAME="812"></A><H2>812.  Components </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components </I> ]</UL>
<H4>Description</H4>                

   <P> Proof by mathematical induction requires two components:  an inductive definition and a proof.  

 </P>  
<HR><HR><A NAME="813"></A><H2>813.  Inductive Definition </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Inductive Definition </I> ]</UL>
<H4>Description</H4>                

   <P> An inductive definition is usually formed as a series of steps called clauses.

   </P> <P> Start by specifying the simplest members of the defined collection, then give rules that tell how to generate 'new' members (successors) of the collection from the existing ones.  Inductive definitions are recursive.

</P>  <H4>Examples</H4><UL>            

   <LI> The rules for a WFF.  

 </LI> </UL> 
<HR><HR><A NAME="814"></A><H2>814.  Base Clause </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Inductive Definition :: Base Clause </I> ]</UL>
<H4>Description</H4>                

   <P> The first clause specifies the most fundamental cases. </P>

 <H4>Example</H4><UL>            

   <LI> 1.  Each proposition symbol is a WFF.  

 </LI> </UL> 
<HR><HR><A NAME="815"></A><H2>815.  Inductive Clauses </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Inductive Definition :: Inductive Clauses </I> ]</UL>
<H4>Description</H4>                

   <P> These clauses describe how to derive new cases from existing ones.

</P>  <H4>Examples</H4><UL>            

   <LI> 2.  If P is a wff, so is &#172;P.
   </LI> <LI> 3.  If P and Q are wff's, so are (P ^ Q), (P v Q), (P -> Q) and (P <-> Q).  

 </LI> </UL> 
<HR><HR><A NAME="816"></A><H2>816.  Final Clause </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Inductive Definition :: Final Clause </I> ]</UL>
<H4>Description</H4>                

   <P> This clause states that valid cases are attainable only by proper application of the earlier clauses.

</P>  <H4>Examples</H4><UL>            

   <LI> 4.  Nothing is a WFF unless it is generated by repeated applications of (1), (2) and (3).  

 </LI> </UL> 
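   <P> Clauses (1)-(4) translate directly into a recursive membership test.  A hedged Python sketch (wffs are modeled here as strings and nested tuples, with '~' for &#172;; this encoding is an illustration, not part of the outline): </P>

```python
# The inductive definition of WFFs as a recursive check over syntax trees.
BINARY = {"^", "v", "->", "<->"}

def is_wff(e) -> bool:
    if isinstance(e, str):                        # clause 1: proposition symbols
        return e.isalpha()
    if isinstance(e, tuple) and len(e) == 2 and e[0] == "~":
        return is_wff(e[1])                       # clause 2: negation
    if isinstance(e, tuple) and len(e) == 3 and e[0] in BINARY:
        return is_wff(e[1]) and is_wff(e[2])      # clause 3: binary connectives
    return False                                  # clause 4: nothing else is a wff

print(is_wff(("->", ("~", "P"), ("^", "P", "Q"))))  # prints True
print(is_wff(("+", "P", "Q")))                      # prints False
```

   <P> The recursion terminates because clauses 2 and 3 only ever recurse into strictly smaller subtrees; the catch-all 'return False' plays the role of the final clause. </P>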
<HR><HR><A NAME="817"></A><H2>817.  Set-Theoretic Definition </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Inductive Definition :: Set-Theoretic Definition </I> ]</UL>
<H4>Description</H4>                

   <P> As it turns out, the final clause of the normal inductive definition cannot be expressed in FOL.  The entire definition can be expressed in FOL if we add an introductory clause and alter the clauses just a bit to make use of set theory.  Then the final clause can be eliminated.

</P>  <H4>Examples</H4><UL>            

   <LI> The set S of WFF's is the smallest set satisfying the following clauses:<BR>
<BR>
   1.  Each sentence symbol is in S.<BR>
   2.  If P is in S, so is &#172;P.<BR>
   3.  If P and Q are in S, so are (P ^ Q), (P v Q), (P -> Q) and (P <-> Q).

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> The introductory clause says 'smallest set' in order to rule out larger sets which satisfy the clauses but contain members that are not WFF's.  Each clause defines the parameters for some set; 'smallest' means that we take the intersection of all sets satisfying the clauses.  

 </LI> </UL> 
<HR><HR><A NAME="818"></A><H2>818.  Proof by Mathematical Induction </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Proof by Mathematical Induction </I> ]</UL>
<H4>Description</H4>                

   <P> All proofs that use Mathematical Induction follow a common format.  This section outlines that basic format.  

 </P>  
<HR><HR><A NAME="819"></A><H2>819.  Basis Step </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Proof by Mathematical Induction :: Basis Step </I> ]</UL>
<H4>Description</H4>                

   <P> First, show that the first item (the base case) has the desired property.  This is called <I> The Basis Step</I>.  

 </P>  
<HR><HR><A NAME="820"></A><H2>820.  Inductive Step </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Proof by Mathematical Induction :: Inductive Step </I> ]</UL>
<H4>Description</H4>                

   <P> Next, show that if a simpler case has property Q, then so will the more complex cases generated by the inductive clauses of the Inductive Definition.  When this step is completed, we want to have inferred something of the form:  &#8704;x(Px &#8594; Psx).  This is usually the conclusion of a hypothetical argument.  

 </P>  
<HR><HR><A NAME="821"></A><H2>821.  Inductive Hypothesis </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Proof by Mathematical Induction :: Inductive Step :: Inductive Hypothesis </I> ]</UL>
<H4>Description</H4>                

   <P> Usually the initial sentence of the inductive step.  It assumes the existence of one or more instances of the Base Clause for the inductive definition and asserts that for each of these cases, Q holds.  

 </P>  
<HR><HR><A NAME="822"></A><H2>822.  &#8870;  &#8704;x((0 + x) = x) </H2><UL>[ <I> Inference :: Deduction :: Applications :: Mathematical Induction :: Components :: Proof by Mathematical Induction :: Examples :: &#8870;  &#8704;x((0 + x) = x) </I> ]</UL>
<H4>Proof</H4>                

<PRE>
1.   &#8704;x(x+0) = x			Peano Axiom 3
2.   (0 + 0) = 0			1 &#8704;E (0 for x)
3.   |   (0 + a) = a			H (for &#8594;I)
4.   |   &#8704;x&#8704;y(x + sy) = s(x + y)		Peano Axiom 4
5.   |   &#8704;y(0 + sy) = s(0 + y)		4 &#8704;E
6.   |   (0 + sa) = s(0 + a)		5 &#8704;E
7.   |   (0 + sa) = sa			3,6 =E
8.   (0 + a) = a &#8594; (0 + sa) = sa		3-7 &#8594;I
9.   &#8704;x((0 + x) = x &#8594; (0 + sx) = sx)	8 &#8704;I
10. &#8704;x(0 + x) = x			2,9 MI
</PRE>

 <H4>Notes</H4><UL>            

   <LI> This proof illustrates a pattern common to most elementary proofs by mathematical induction.  (Basis Step: lines 1-2)  Zero is shown to have the relevant property.  (Inductive Step: lines 3-9)  A hypothetical derivation followed by a step of &#8594;I and a step of &#8704;I establishes that if any number has this property, so does that number's successor.  The final lines of these two parts supply exactly what is needed to apply MI and obtain the desired conclusion.  

 </LI> </UL> 
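   <P> A finite spot-check is no substitute for the inductive proof, but it can illustrate the two MI premises for P(x): (0 + x) = x.  A hedged Python sketch using the successor-numeral encoding (illustrative only): </P>

```python
# Addition on numerals ('s' * n + '0'), per Peano Axioms 3 and 4.
def add(x: str, y: str) -> str:
    if y == "0":                  # Axiom 3: (x + 0) = x
        return x
    return "s" + add(x, y[1:])    # Axiom 4: (x + sy) = s(x + y)

# Basis step: P(0), i.e. (0 + 0) = 0.
assert add("0", "0") == "0"

# Inductive step, sampled: whenever (0 + a) = a, also (0 + sa) = sa.
for n in range(10):
    a = "s" * n + "0"
    if add("0", a) == a:
        assert add("0", "s" + a) == "s" + a

print("basis step and sampled inductive steps hold")
```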
<HR><HR><A NAME="823"></A><H2>823.  Induction </H2><UL>[ <I> Inference :: Induction </I> ]</UL>


<H4>Description</H4>                

   <P> Induction deals with arguments whose propositions are claimed to have less than maximal inductive probabilities (i.e. probabilities less than 1.0).  For such arguments, it is possible for the conclusion to be false while all the premises are true.

</P>  <H4>Notes</H4><UL>            

   <LI> For inductive arguments, Statistical Syllogisms are strongest (most reliable conclusions, given the premises).  Statistical Generalizations are slightly weaker, while Analogical Reasoning and Causal Reasoning (by Mill's Methods) are the weakest.  Conclusions reached through the Probability Calculus are about as strong as the weakest premise; the calculus itself does not actually introduce doubt.  

 </LI> </UL> 
<HR><HR><A NAME="824"></A><H2>824.  Statement Strength (Varzi) </H2><UL>[ <I> Inference :: Induction :: Statement Strength (Varzi) </I> ]</UL>
<H4>Description</H4>                

   <P> Often, the probability of an argument is not known.  The only thing that can be said is that its likelihood is 'strong' or 'weak'.

   </P> <P> If even one premise of an inductive argument is known only by strength, the entire argument is an argument of strength rather than probability.  

 </P>  
<HR><HR><A NAME="825"></A><H2>825.  Strength Determination </H2><UL>[ <I> Inference :: Induction :: Statement Strength (Varzi) :: Strength Determination </I> ]</UL>
<H4>Description</H4>                

   <P> Strength is determined by what the statement says.  The more it says, the stronger it is, regardless of its truth value.

   </P> <P> It is approximately inversely related to what's called its a priori probability.

   </P> <P> The stronger a statement is, the less inherently likely it is to be true.

</P>  <H4>Examples</H4><UL>            

   <LI>There are exactly 200 cities with populations over 100,000 in the US. (strong)

   </LI> <LI> Something is happening somewhere.  (weak)

   </LI> <LI> Waldo both is and is not a cat.  (strong - Also false.  Being a contradiction, it logically implies every statement.  (see derived rules of prop calc))

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> My note:  Strong statements seem to be more specific in terms of their facts.  

 </LI> </UL> 
<HR><HR><A NAME="826"></A><H2>826.  Negation of </H2><UL>[ <I> Inference :: Induction :: Statement Strength (Varzi) :: Negation of </I> ]</UL>
<H4>Description</H4>                

   <P> The negation of a strong statement is weak and vice-versa.  

 </P>  
<HR><HR><A NAME="827"></A><H2>827.  Rule 1 </H2><UL>[ <I> Inference :: Induction :: Statement Strength (Varzi) :: Relative Strength (ranking) :: Rule 1 </I> ]</UL>
<H4>Description</H4>                

   <P> If statement A deductively implies statement B but B does not deductively imply A, then A is stronger than B.  

 </P>  
<HR><HR><A NAME="828"></A><H2>828.  Rule 2 </H2><UL>[ <I> Inference :: Induction :: Statement Strength (Varzi) :: Relative Strength (ranking) :: Rule 2 </I> ]</UL>
<H4>Description</H4>                

   <P> If statement A is logically equivalent to statement B (i.e. if A and B deductively imply one another), then A and B are equal in strength.  

 </P>  
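   <P> For statements of the propositional calculus, the deductive implications that Rules 1 and 2 appeal to can be checked by brute force over truth assignments.  A hedged Python sketch (the encoding of statements as Python functions on a valuation is illustrative): </P>

```python
from itertools import product

def implies(A, B, atoms):
    """True if A deductively implies B: no valuation makes A true and B false."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if A(v) and not B(v):
            return False
    return True

A = lambda v: v["p"] and v["q"]   # p ^ q
B = lambda v: v["p"]              # p

# Rule 1: A implies B but not conversely, so A is the stronger statement.
print(implies(A, B, ["p", "q"]))  # prints True
print(implies(B, A, ["p", "q"]))  # prints False
```

   <P> Rule 2's test for equal strength is then just implication in both directions. </P>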
<HR><HR><A NAME="829"></A><H2>829.  Probabilities </H2><UL>[ <I> Inference :: Induction :: Probabilities </I> ]</UL>
<H4>Description</H4>                

   <P> In probability logic, the inductive probability of each proposition in the argument is quantifiable.  That is, the truth of each proposition has a probability ranging from 0 (absolutely false) to 1 (absolutely true), or any value in between.  

 </P>  
<HR><HR><A NAME="830"></A><H2>830.  Populations & Samples </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples </I> ]</UL>
<H4>Description</H4>                

   <P> This section describes the inductive arguments among sets.  

 </P>  
<HR><HR><A NAME="831"></A><H2>831.  Statistical Syllogism (population  ==>  subset) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Statistical Syllogism (population  ==>  subset) </I> ]</UL>
<H4>Description</H4>                

   <P> A statistical syllogism is an argument which reasons from a population to a subset or member.  It has the following form:

<PRE>
	n% of F are G.
	x is F.
	This is all we know about the matter.
   <B>&#8756;</B>	It's n% probable that x is G.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>
	98% of college freshmen can read beyond the 6th-grade level.
	Dave is a college freshman.
	This is all we know about the matter.
   <B>&#8756;</B>	It's 98% probable that Dave can read beyond the 6th-grade level.
   </PRE>  

 </LI> </UL> 
<HR><HR><A NAME="832"></A><H2>832.  Statistical Generalization (subset  ==>  population) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Statistical Generalization (subset  ==>  population) </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Sample-Projection Syllogism

   </LI> <LI> Inductive Generalization

</LI> </UL> <H4>Description</H4>                

   <P> A statistical generalization is an argument which reasons from a sample to a population.  Ideally, the sample is fully representative; however, this is not usually possible or practical, so the next best thing is a large and varied (randomly selected) sample.  It has the following form:

<PRE>
	n% of examined F's are G's.
	A large and varied group of F's has been examined.
   <B>&#8756;</B>	Probably roughly n% of all F's are G's.

Where,
   n, the percentage observed in the sample
   F, the population (total set) about which we are generalizing.
   G, the property studied by the survey.
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> 	Fewer than 1% of (1000 ball bearings randomly selected for testing from the 1997 production run of the Saginaw plant) (failed to meet specifications).
   <B>&#8756;</B>	Only a small percentage of all ball bearings produced during the 1997 production run at the Saginaw plant failed to meet specifications.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Three factors determine the strength of the projection (conclusion):  The <I>size</I> of the sample, the <I>variety</I> of the sample, the <I>cautiousness</I> of the conclusion.  

 </LI> </UL> 
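   <P> The effect of sample size can be illustrated by simulation.  A hedged Python sketch (the population figures are invented for the demo): </P>

```python
import random

# A population in which 98% of F's are G's, sampled at random.
random.seed(0)
population = [True] * 98_000 + [False] * 2_000

sample = random.sample(population, 1_000)     # a large random sample
estimate = sum(sample) / len(sample)

# At this sample size the sample proportion typically lands within
# about a percentage point of the true 0.98.
print(estimate)
```

   <P> A small sample (say, 10) drawn the same way fluctuates far more, which is one reason the strength of a statistical generalization grows with the size and variety of the sample. </P>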
<HR><HR><A NAME="833"></A><H2>833.  Analogical Reasoning (set  ==>  set) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Argument by Analogy

</LI> </UL> <H4>Description</H4>                

   <P> While statistical syllogism and statistical generalization are the complementary operations of reasoning from a population to a subset and vice-versa, analogical reasoning is the operation of reasoning from one set to another, similar set (whether one is a subset of the other is not known).

   </P> <P> Argument by analogy is similar to generalization.  In this way it can be thought of as consisting of two parts.  First there is the generalization part, where the arguer begins with one or more instances and proceeds to draw a conclusion about all the members of a class.  The arguer may then apply the generalization to one or more members of this class that were not noted earlier.  The first part is inductive, the second is deductive.

   </P> <P> In argument by analogy, the arguer proceeds directly from one or more individual instances to a conclusion about another individual instance without appealing to any intermediate generalization.  Such an argument is purely inductive.

</P>  <H4>Characteristics</H4><OL>            

   <LI> A comparison is made between two or more objects with respect to their sharing various properties in common.
   </LI> <LI> All but one of these objects is claimed to have an additional property.
   </LI> <LI> The conclusion is drawn that the other object has the property.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> The fundamental concept behind formal logic, the concept that an argument is valid because it follows the form of other arguments that are valid, is a form of analogical induction.  Until the invention of metalogic, this was all the certainty that anyone could have in deductive systems.  Metalogic shows, without doubt, the validity of formal systems based upon the semantics of the logical operations.  

 </LI> </UL> 
<HR><HR><A NAME="834"></A><H2>834.  Argument Form </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Argument Form </I> ]</UL>
<H4>Description</H4>                

   <P> Observing that an object x has many properties, F1, F2, ..., Fn, in common with some other object y, and that y also has some further property G, we conclude that x probably has G as well.

<PRE>
	F1x ^ F2x ^ ... ^ Fnx 
	F1y ^ F2y ^ ... ^ Fny
	Gy
   <B>&#8756;</B>	Gx
</PRE>

   </P> <P> If properties F1 through Fn are connected in some important way to G (that is, are relevant to G), the argument is usually strong.  If they are not so connected (that is, are irrelevant to G), the argument is usually weak.

</P>  <H4>Genzler's Form</H4>                

<PRE>
	Most things true of X also are true of Y.
	X is A.
	This is all we know about the matter.
   <B>&#8756;</B>	Probably Y is A.
</PRE>  

  
<HR><HR><A NAME="835"></A><H2>835.  Analogues </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Analogues </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The items being compared in an argument of analogy.  

 </P>  
<HR><HR><A NAME="836"></A><H2>836.  Primary </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Analogues :: Primary </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The analogues which are known to have all the properties.  The analogues to which we compare the item we are trying to induce something more about.  

 </P>  
<HR><HR><A NAME="837"></A><H2>837.  Secondary </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Analogues :: Secondary </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The analogue to which we are trying to attribute one or more new properties based upon the fact that it already possesses other similarities to the primary analogues.  

 </P>  
<HR><HR><A NAME="838"></A><H2>838.  Similarities </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Similarities </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The properties that apply to both primary and secondary analogues.  

 </P>  
<HR><HR><A NAME="839"></A><H2>839.  Known </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Similarities :: Known </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The similarities known for a fact to be properties of both the primary and secondary analogues.  

 </P>  
<HR><HR><A NAME="840"></A><H2>840.  Induced </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Similarities :: Induced </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The properties induced from the primary to the secondary analogues.  

 </P>  
<HR><HR><A NAME="841"></A><H2>841.  Disanalogues </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Terms :: Disanalogues </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Properties which are differences between the analogues.  

 </P>  
<HR><HR><A NAME="842"></A><H2>842.  Evaluation Criteria </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria </I> ]</UL>
<H4>Description</H4>                

   <P> Argument by analogy is inductive, therefore analogous arguments are not without problems.  The following criteria should help to separate the weak arguments from the strong ones.  

 </P>  
<HR><HR><A NAME="843"></A><H2>843.  Relevance of the similarities </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria :: Relevance of the similarities </I> ]</UL>
<H4>Description</H4>                

   <P> The known similarities should have some relevance to the induced similarities.

</P>  <H4>Examples</H4><UL>            

   <LI> Lucy is buying a car.  She decides to buy a Chevrolet because she wants good gas mileage and her friend Tom's new Chevy gets good mileage.  To support her decision, Lucy argues that both cars have a padded steering wheel, tachometer, vinyl upholstery, tinted windows, CD player, and white paint.  Lucy's argument is weak because these similarities are irrelevant to gas mileage.  On the other hand, if Lucy bases her conclusion on the fact that both cars have the same size engine, her argument is relatively strong, because engine size is relevant to gas mileage.  

 </LI> </UL> 
<HR><HR><A NAME="844"></A><H2>844.  Number of similarities </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria :: Number of similarities </I> ]</UL>
<H4>Description</H4>                

   <P> The greater the number of relevant known similarities between the primary and secondary analogues, the stronger the argument for the induced similarities.

</P>  <H4>Examples</H4>                

   <P> Lucy wants to buy a new car, one that is fuel efficient.  Lucy makes her decision based on the fact that both cars have the same size engine.  Her argument is relatively strong, because engine size is relevant to gas mileage.  If Lucy notes additional relevant similarities, such as curb weight, aerodynamic body, gear ratio and tires, then her argument becomes stronger.  

 </P>  
<HR><HR><A NAME="845"></A><H2>845.  Nature and degree of disanalogy </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria :: Nature and degree of disanalogy </I> ]</UL>
<H4>Description</H4>                

   <P> If the differences between the primary and secondary analogue form a strong argument against the induced similarities, the induced properties may be, in fact, weak. However, the nature of the disanalogies may in fact strengthen the argument as well. 

</P>  <H4>Examples</H4>                

   <P> (See Number of similarities)  If the car that Lucy intends to buy is equipped with a turbocharger, Lucy loves to make jackrabbit starts and screeching stops, Tom's car has overdrive but Lucy's does not, and Lucy constantly drives on congested freeways while Tom drives on relatively clear freeways, then Lucy's argument is weakened.

   </P> <P> If, on the other hand, all these disanalogies are attributed to Tom and his car, Lucy's argument is strengthened.  

 </P>  
<HR><HR><A NAME="846"></A><H2>846.  Number of primary analogues </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria :: Number of primary analogues </I> ]</UL>
<H4>Description</H4>                

   <P> Basically, every rule has exceptions.  So one primary analogue does not necessarily make for a strong argument.  However, several primary analogues significantly strengthen the argument.

</P>  <H4>Examples</H4><UL>            

   <LI> (See Nature and degree of disanalogy)  Thus far, Lucy has based her conclusion on the similarity between the car she intends to buy and only one other car -- Tom's.  Now suppose that Lucy has three additional friends, that all of them drive cars of the same model and year as Tom's, and that all of them get good gas mileage.  These additional primary analogues strengthen Lucy's argument because they lessen the likelihood that Tom's good gas mileage is a freak incident.  On the other hand, suppose that two of these additional friends get the same good gas mileage as Tom but that the third gets poor mileage.  As before, the first two cases tend to strengthen Lucy's argument, but the third now tends to weaken it.  This third case is called a counteranalogy because it supports a conclusion opposed to that of the original analogy.  

 </LI> </UL> 
<HR><HR><A NAME="847"></A><H2>847.  Diversity among the primary analogues </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria :: Diversity among the primary analogues </I> ]</UL>
<H4>Examples (See: Number of primary analogues)</H4>                

   <P> Suppose now that Lucy's four friends (all of whom get good mileage) all buy their gas at the same station, have their cars tuned up regularly by the same mechanic, put the same friction-reducing additive in their oil, inflate their tires to the same pressure, and do their city driving on uncongested, level streets at a fuel-maximizing 28 miles per hour.  Such factors would tend to reduce the probability of Lucy's conclusion, because it is possible that one or a combination of them is responsible for the good mileage and that this factor (or combination of them) is absent in Lucy's case.  On the other hand, if Lucy's friends buy their gas at different stations, have their cars tuned up at different intervals by different mechanics, inflate their tires to different pressures, drive at different speeds on different grades, and have different attitudes toward using the oil additive, then it is less likely that the good gas mileage they enjoy is attributable to any factor other than the model and year of their car.  

 </P>  
<HR><HR><A NAME="848"></A><H2>848.  Specificity of the conclusion </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Evaluation Criteria :: Specificity of the conclusion </I> ]</UL>
<H4>Description</H4>                

   <P> More specific conclusions will often be weaker.

</P>  <H4>Examples (See: Diversity among the primary analogues)</H4>                

   <P> Lucy's conclusion is simply that her car will get "good" mileage.  If she now changes her conclusion to state that her car will get gas mileage "at least as good" as Tom's, then her argument is weakened.  Such a conclusion is more specific than the earlier conclusion and is easier to falsify.  Thus, if her mileage were only one-tenth of a mile per gallon less than Tom's, her conclusion would turn out to be false.  Now suppose that Lucy changes her conclusion to state that her car will get exactly the same gas mileage as Tom's.  This conclusion is even more specific than the "at least as good" conclusion, and it is easier still to falsify.  Thus, such a conclusion renders Lucy's argument much weaker than her original argument.  

 </P>  
<HR><HR><A NAME="849"></A><H2>849.  Legal Reasoning </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Populations & Samples :: Analogical Reasoning (set  ==>  set) :: Practical Analogy :: Legal Reasoning </I> ]</UL>
<H4>Description</H4>                

   <P> Legal reasoning proceeds largely through precedent -- the analogical reasoning from a current case to previous, similar cases.  

 </P>  
<HR><HR><A NAME="850"></A><H2>850.  Probability </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Notation & Interpretation :: Probability </I> ]</UL>
<H4>Form</H4>                

   <P> P( A ) = n, where A is some event, and n is in [0,1]

</P>  <H4>Interpretation</H4>                

   <P> The probability of A is n.  

 </P>  
<HR><HR><A NAME="851"></A><H2>851.  Odds </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Notation & Interpretation :: Odds </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> OddsInFavor( A ) = n

   </LI> <LI> OddsAgainst( A ) = n

</LI> </UL> <H4>Interpretation</H4>                

   <P> The odds in favor of A are n.

   </P> <P> The odds against A are n.

</P>  <H4>Semantics</H4>                

   <P> OddsInFavor( A ) = (number of favorable cases) to (number of unfavorable cases)

   </P> <P> OddsAgainst( A ) = (number of unfavorable cases) to (number of favorable cases)

</P>  <H4>Examples</H4><UL>            

   <LI> The odds are 6 to 1 against winning the race.

   </LI> <LI> The odds are 6 to 1 in favor of losing the race.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> It is conventional to use the form which places the larger number first.  

 </LI> </UL> 
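<P> The semantics above can be sketched as a pair of conversion functions.  This is an illustration only -- the function names are mine, not part of the outline -- using exact fractions so the case counts survive the round trip. </P>

```python
from fractions import Fraction

def probability_from_odds_against(unfavorable, favorable):
    """Convert 'odds against' (e.g. 6 to 1) into a probability."""
    return Fraction(favorable, favorable + unfavorable)

def odds_against_from_probability(p):
    """Convert a probability back into (unfavorable, favorable) counts."""
    p = Fraction(p).limit_denominator()
    return (p.denominator - p.numerator, p.numerator)

# The odds are 6 to 1 against winning the race:
p_win = probability_from_odds_against(6, 1)          # 1/7
odds = odds_against_from_probability(Fraction(1, 7))  # (6, 1)
```

<P> Note that 6-to-1 odds against correspond to a probability of 1/7, not 1/6: the favorable case is one of seven cases in all. </P>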
<HR><HR><A NAME="852"></A><H2>852.  Interpretations (Varzi) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Interpretations (Varzi) </I> ]</UL>
<H4>Description</H4>                

   <P> How we determine the value of n.  

 </P>  
<HR><HR><A NAME="853"></A><H2>853.  Subjective Interpretation </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Interpretations (Varzi) :: Subjective Interpretation </I> ]</UL>
<H4>Notation</H4>                

   <P> P(A) = n

</P>  <H4>Description</H4>                

   <P> Where n stands for the degree of belief a particular rational person has in proposition A at a given time.  Degree of belief is gauged behaviorally by the person's willingness to accept certain bets on the truth of A.  

 </P>  
<HR><HR><A NAME="854"></A><H2>854.  Logical Interpretation </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Interpretations (Varzi) :: Logical Interpretation </I> ]</UL>
<H4>Notation</H4>                

   <P> P(A) = n

</P>  <H4>Description</H4>                

   <P> Where n designates the logical or a priori probability of A.  There are many notions of logical probability, but according to all of them P(A) varies inversely with the information content of A.  That is, if A is a weak proposition whose information content is small, P(A) tends to be high, and if A is a strong proposition whose information content is great, then P(A) tends to be low.  

 </P>  
<HR><HR><A NAME="855"></A><H2>855.  Relative Frequency Interpretation </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Interpretations (Varzi) :: Relative Frequency Interpretation </I> ]</UL>
<H4>Notation</H4>                

   <P> P(A) = n

</P>  <H4>Description</H4>                

   <P> Where A is usually taken to be an event and P(A) is the frequency of occurrence of A relative to some specified reference class of events.  This is the interpretation of probability most often used in mathematics and statistics.  

 </P>  
<HR><HR><A NAME="856"></A><H2>856.  Classical Interpretation </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Interpretations (Varzi) :: Classical Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> Like the relative frequency interpretation, the classical interpretation usually takes the object A to be an event.  According to the classical interpretation, probabilities can be defined only when a situation has a finite nonzero number of equally likely possible outcomes, as for example in the toss of a fair die.  Here the number of equally likely outcomes is 6, one for each face of the die.  The probability of A is defined as the ratio of the number of possible outcomes in which A occurs to the total number of possible outcomes:

   </P> <P> P(A) = NumOccurrences / SampleSize  

 </P>  
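<P> The classical definition can be checked mechanically by enumerating the equally likely outcomes.  The sketch below models a fair die; the names are illustrative, not from the text. </P>

```python
from fractions import Fraction

# Model the toss of a fair die as six equally likely outcomes.
outcomes = [1, 2, 3, 4, 5, 6]

def classical_probability(event):
    """P(A) = (outcomes in which A occurs) / (total outcomes)."""
    favorable = [o for o in outcomes if event(o)]
    return Fraction(len(favorable), len(outcomes))

p_even = classical_probability(lambda o: o % 2 == 0)   # 3/6 = 1/2
p_six = classical_probability(lambda o: o == 6)        # 1/6
```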
<HR><HR><A NAME="857"></A><H2>857.  P( A ) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: P( A ) </I> ]</UL>
<H4>Description</H4>                

   <P> P( A ) = (number of favorable cases) / (total number of cases)  

 </P>  
<HR><HR><A NAME="858"></A><H2>858.  If A is a necessary truth </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: If A is a necessary truth </I> ]</UL>
<H4>Notation</H4>                

   <P> P( A ) = 1.00

   </P> <P> OddsInFavor( A ) = 1 to 0  

 </P>  
<HR><HR><A NAME="859"></A><H2>859.  If A is a contradiction </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: If A is a contradiction </I> ]</UL>
<H4>Notation</H4>                

   <P> P( A ) = 0.00

   </P> <P> OddsInFavor( A ) = 0 to 1  

 </P>  
<HR><HR><A NAME="860"></A><H2>860.  P( &#172; A ) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: P( &#172; A ) </I> ]</UL>
<H4>Description</H4>                

   <P> P( &#172;A ) = 1.00 - P( A )  

 </P>  
<HR><HR><A NAME="861"></A><H2>861.  P( A ^ B ), where A & B are independent </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: P( A ^ B ), where A & B are independent </I> ]</UL>
<H4>Description</H4>                

   <P> P( A ^ B ) = P( A ) * P( B )  

 </P>  
<HR><HR><A NAME="862"></A><H2>862.  P( A v B ), where A & B are mutually exclusive </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: P( A v B ), where A & B are mutually exclusive </I> ]</UL>
<H4>Description</H4>                

   <P> P( A v B ) = P( A ) + P( B )  

 </P>  
<HR><HR><A NAME="863"></A><H2>863.  P( A v B ), where A & B are not mutually exclusive </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: P( A v B ), where A & B are not mutually exclusive </I> ]</UL>
<H4>Description</H4>                

   <P> P( A v B ) = P( A ) + P( B ) - P( A ^ B )  

 </P>  
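<P> The calculation rules above -- negation, independent conjunction, and disjunction with and without mutual exclusion -- can be sketched together with exact fractions.  The events and variable names are illustrative only. </P>

```python
from fractions import Fraction

# One fair die: A = "roll a six", B = "roll an even number".
p_a = Fraction(1, 6)                  # P(six)
p_b = Fraction(1, 2)                  # P(even)

# Negation: P(-A) = 1 - P(A)
p_not_a = 1 - p_a                     # 5/6

# Independent conjunction (two separate dice): P(A ^ B) = P(A) * P(B)
p_six_then_even = p_a * p_b           # 1/12

# Non-exclusive disjunction on a single roll: "six" and "even"
# overlap (six is even), so the overlap must be subtracted once.
p_a_and_b = Fraction(1, 6)            # the single outcome "6"
p_a_or_b = p_a + p_b - p_a_and_b      # 1/2
```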
<HR><HR><A NAME="864"></A><H2>864.  P( A | B ) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Probability Calculations (Genzler) :: P( A | B ) </I> ]</UL>
<H4>Description</H4>                

   <P> The probability of A given B.

</P>  <H4>Solution</H4><UL>            

   <LI> P( A | B ) = P( A ^ B ) / P( B )  

 </LI> </UL> 
<HR><HR><A NAME="865"></A><H2>865.  Axioms </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Axioms </I> ]</UL>
<H4>Description</H4>                

   <P> These axioms constitute a definition similar in form to that of mathematical induction.  All properties of the probability calculus can be deduced from these three axioms as theorems.

</P>  <H4>The Axioms</H4><OL>            

   <LI> P(A) >= 0

   </LI> <LI> (A <-> (Q v &#172;Q))  <->  (P(A) = 1.0) <BR>
   i.e.  If A is tautologous, P(A) = 1

   </LI> <LI> &#172;&#9671;(A ^ B)  <->  (P(A v B) = P(A) + P(B))<BR>
   i.e.  If A and B are mutually exclusive, P(A v B) = P(A) + P(B)  

 </LI> </OL> 
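<P> As a sanity check -- an illustration under the classical model of a fair die, not part of Kolmogorov's formulation -- the three axioms can be verified by brute force over a finite sample space, representing a proposition as the set of outcomes on which it is true. </P>

```python
from fractions import Fraction
from itertools import combinations

# Equiprobable sample space for one die toss; a "proposition" is a
# set of outcomes (the tautology is the whole space).
space = frozenset(range(1, 7))

def P(event):
    return Fraction(len(event & space), len(space))

# AX1: P(A) >= 0 for every event.
events = [frozenset(c) for r in range(7) for c in combinations(space, r)]
assert all(P(e) >= 0 for e in events)

# AX2: the tautologous event has probability 1.
assert P(space) == 1

# AX3: mutually exclusive events are additive.
low, high = frozenset({1, 2}), frozenset({5, 6})
assert low & high == frozenset()
assert P(low | high) == P(low) + P(high)
```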
<HR><HR><A NAME="866"></A><H2>866.  1.  P(&#172;A) = 1 - P(A) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 1.  P(&#172;A) = 1 - P(A) </I> ]</UL>
<H4>Proof</H4>                

   <P> Since A v &#172;A is tautologous, by AX2 we have P(A v &#172;A) = 1.  And since A and &#172;A are mutually exclusive, by AX3, P(A v &#172;A) = P(A) + P(&#172;A).  Hence 1 = P(A) + P(&#172;A), and so P(&#172;A) = 1 - P(A).  This theorem tells how to obtain the probability of the negation of A, given the probability of A, and vice versa.  

 </P>  
<HR><HR><A NAME="867"></A><H2>867.  2.  If A is contradictory, P(A) = 0 </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 2.  If A is contradictory, P(A) = 0 </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose A is contradictory.  Then &#172;A is tautologous, since negation changes every T to F in the truth table.  Hence, by AX2, P(&#172;A) = 1.  So, by Thm 1, P(A) = 0.  

 </P>  
<HR><HR><A NAME="868"></A><H2>868.  3.  0 <= P(A) <= 1 </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 3.  0 <= P(A) <= 1 </I> ]</UL>
<H4>Proof</H4>                

   <P> By AX1, we know that 0 <= P(A).  We also know that 0 <= P(&#172;A).  Now by Thm 1, P(&#172;A) = 1 - P(A), and so 0 <= 1 - P(A), that is, P(A) <= 1.  This theorem summarizes the upper and lower bounds placed on probability values by the Kolmogorov axioms.  

 </P>  
<HR><HR><A NAME="869"></A><H2>869.  4.  If A and B are truth-functionally equivalent, then P(A) = P(B). </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 4.  If A and B are truth-functionally equivalent, then P(A) = P(B). </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose A and B are truth-functionally equivalent.  Then A and B have the same truth conditions.  Hence A and &#172;B must always have opposite truth values, so that A and &#172;B are mutually exclusive.  Moreover, A v &#172;B is a tautology.  Hence by AX3 (substituting '&#172;B' for 'B'), P(A v &#172;B) = P(A) + P(&#172;B), and by AX2, P(A v &#172;B) = 1.  Thus P(A) + P(&#172;B) = 1.  Then, by Thm 1, P(A) + 1 - P(B) = 1, whence it follows that P(A) = P(B).  

 </P>  
<HR><HR><A NAME="870"></A><H2>870.  5.  P(A v B) = P(A) + P(B) - P(A ^ B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 5.  P(A v B) = P(A) + P(B) - P(A ^ B) </I> ]</UL>
<H4>Description</H4>                

   <P> This is a complex proof, which follows in parts.  

 </P>  
<HR><HR><A NAME="871"></A><H2>871.  Moreover, </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 5.  P(A v B) = P(A) + P(B) - P(A ^ B) :: Moreover, </I> ]</UL>
from item (c) and Thm4 it follows that P(A v B) = P(((A ^ &#172;B) v (&#172;A ^ B)) v (A ^ B)), whence by item (f) and AX3 we obtain P(A v B) = P((A ^ &#172;B) v (&#172;A ^ B)) + P(A ^ B).  But then by item (g) and AX3 we see that:  

 
<HR><HR><A NAME="872"></A><H2>872.  That is, </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 5.  P(A v B) = P(A) + P(B) - P(A ^ B) :: That is, </I> ]</UL>
if we simply add P(A) and P(B), where A and B are not mutually exclusive (i.e., where P(A ^ B) > 0), P(A ^ B) will be counted twice.  But as item (j) tells us, the correct value for P(A v B) is obtained by counting P(A ^ B) only once.  Now, subtracting the equation in item (k) from that in item (j), we obtain:  

 
<HR><HR><A NAME="873"></A><H2>873.  6.  P(A ^ B) <= P(A) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 6.  P(A ^ B) <= P(A) </I> ]</UL>
<H4>Proof</H4>                

   <P> By item (h) in the proof of Thm 5, P(A) = P(A ^ B) + P(A ^ &#172;B).  But by Ax1, P(A ^ &#172;B) >= 0.  Since adding a nonnegative quantity to P(A ^ B) gives P(A), it must be that P(A ^ B) <= P(A).  Proof of the second conjunct is similar, except that item (i) from the proof of Thm 5 is used.  

 </P>  
<HR><HR><A NAME="874"></A><H2>874.  7.  P(A v B) >= P(A) and P(A v B) >= P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 7.  P(A v B) >= P(A) and P(A v B) >= P(B) </I> ]</UL>
<H4>Proof</H4>                

   <P> By Thm 5, P(A v B) = P(A) + P(B) - P(A ^ B).  And by Thm 6, P(B) - P(A ^ B) >= 0.  Hence, P(A v B) >= P(A).  Proof of the second conjunct is similar.  

 </P>  
<HR><HR><A NAME="875"></A><H2>875.  8.  If P(A) = P(B) = 0, then P(A v B) = 0 </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 8.  If P(A) = P(B) = 0, then P(A v B) = 0 </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose P(A) = P(B) = 0.  Then by Thm 6 and Ax1, P(A ^ B) = 0.  Hence, by Thm 5, P(A v B) = 0.  

 </P>  
<HR><HR><A NAME="876"></A><H2>876.  9.  If P(A) = 1, then P(A ^ B) = P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 9.  If P(A) = 1, then P(A ^ B) = P(B) </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose P(A) = 1.  Now by Thm 7, P(A v B) >= P(A), and by Thm 3, P(A v B) <= 1.  Hence P(A v B) = 1.  Furthermore, by Thm 5, P(A v B) = P(A) + P(B) - P(A ^ B), so that 1 = 1 + P(B) - P(A ^ B); that is, P(A ^ B) = P(B).  

 </P>  
<HR><HR><A NAME="877"></A><H2>877.  10.  If A is a truth-functional consequence of B, then P(A ^ B) = P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic Theorems :: 10.  If A is a truth-functional consequence of B, then P(A ^ B) = P(B) </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose A is a truth-functional consequence of B.  Then A ^ B is truth-functionally equivalent to B, since A ^ B is true on any line of a truth table in which B is true and false on any line of a truth table on which B is false.  So by Thm 4, P(A ^ B) = P(B).  

 </P>  
<HR><HR><A NAME="878"></A><H2>878.  Conditional Probability (CP) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Conditional Probability (CP) </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> P(A|B), which is read:  "the probability of A, given B."

</LI> </UL> <H4>Semantics</H4>                

   <P> Conditional probability is the probability of one proposition (or event), given that another is true (or has occurred).  It is not to be confused with the probability of a conditional statement, P(B -> A), which plays little role in probability theory.

</P>  <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> P(A | B)&#8797;P(A ^ B) / P(B)

</P>  <H4>Notes</H4><UL>            

   <LI> &#8797; means 'is by definition'

   </LI> <LI> If P(B) = 0, P(A | B) has no value.

   </LI> <LI> By the classical interpretation, this definition is easy to understand.  

 </LI> </UL> 
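<P> Under the classical interpretation the definition amounts to restricting attention to the outcomes where B holds.  A sketch (the helper names are mine), again on a fair die: </P>

```python
from fractions import Fraction

space = range(1, 7)   # one toss of a fair die

def P(pred):
    hits = [o for o in space if pred(o)]
    return Fraction(len(hits), len(space))

def P_given(a, b):
    """P(A | B) =def P(A ^ B) / P(B); no value when P(B) = 0."""
    p_b = P(b)
    if p_b == 0:
        return None
    return P(lambda o: a(o) and b(o)) / p_b

even = lambda o: o % 2 == 0
gt3 = lambda o: o > 3

# Outcomes above 3 are {4,5,6}; the even ones among them are {4,6}.
p = P_given(even, gt3)   # 2/3
```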
<HR><HR><A NAME="879"></A><H2>879.  C1.  P(A | A) = 1 </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C1.  P(A | A) = 1 </I> ]</UL>
<H4>Proof</H4>                

   <P> A ^ A is truth-functionally equivalent to A.  Thus, by Thm4, P(A | A) = P(A ^ A) / P(A) = P(A) / P(A) = 1.  

 </P>  
<HR><HR><A NAME="880"></A><H2>880.  C2.  P(&#172;A | A) = 0 </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C2.  P(&#172;A | A) = 0 </I> ]</UL>
<H4>Proof</H4>                

   <P> &#172;A ^ A is contradictory, and so by Thm 2, P(&#172;A ^ A) = 0.  Thus, by Def Cond Prob P(&#172;A | A) = P(&#172;A ^ A) / P(A) = 0 / P(A) = 0.  

 </P>  
<HR><HR><A NAME="881"></A><H2>881.  C3.  If B is Taut, P(A | B) = P(A) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C3.  If B is Taut, P(A | B) = P(A) </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose B is tautologous.  Then by Ax2, P(B) = 1.  Moreover, A ^ B is truth-functionally equivalent to A, so that by Thm4, P(A ^ B) = P(A).  Thus, P(A | B) = P(A ^ B) / P(B) = P(A) / 1 = P(A).  

 </P>  
<HR><HR><A NAME="882"></A><H2>882.  C4.  If A and B are truth-functionally equivalent, then P(A | C) = P(B | C) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C4.  If A and B are truth-functionally equivalent, then P(A | C) = P(B | C) </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose A and B are truth-functionally equivalent.  Then so are A ^ C and B ^ C.  Hence by Thm4, P(A ^ C) = P(B ^ C).  But then, by Def of Cond Prob,  P(A | C) = P(A ^ C) / P(C) = P(B ^ C) / P(C) = P(B | C).  

 </P>  
<HR><HR><A NAME="883"></A><H2>883.  C5.  If A and B are truth-functionally equivalent, then P(C | A) = P(C | B). </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C5.  If A and B are truth-functionally equivalent, then P(C | A) = P(C | B). </I> ]</UL>
<H4>Proof</H4>                

   <P> Suppose A and B are truth-functionally equivalent.  Then, by Thm 4, P(A) = P(B).  Moreover, by the reasoning of Thm C4, P(C ^ A) = P(C ^ B).  Hence, by Def of Cond Prob,  P(C | A) = P(C ^ A) / P(A) = P(C ^ B) / P(B) = P(C | B).  

 </P>  
<HR><HR><A NAME="884"></A><H2>884.  C6.  P(A ^ B) = P(A ) * P(B | A) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C6.  P(A ^ B) = P(A ) * P(B | A) </I> ]</UL>
<H4>Proof</H4>                

   <P> Def of Cond Prob gives P(B | A) = P(B ^ A) / P(A).  Hence, since B ^ A is truth-functionally equivalent to A ^ B, by Thm 4 we have P(B | A) = P(A ^ B) / P(A).  Multiplying both sides of this equation by P(A) yields the theorem.  

 </P>  
<HR><HR><A NAME="885"></A><H2>885.  C7.  P(A) * P(B | A) = P(B) * P(A | B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Basic CP Theorems :: C7.  P(A) * P(B | A) = P(B) * P(A | B) </I> ]</UL>
<H4>Proof</H4>                

   <P> By Thm C6, P(A ^ B) = P(A) * P(B | A) and also P(B ^ A) = P(B) * P(A | B).  But A ^ B is truth-functionally equivalent to B ^ A.  Hence, by Thm 4, P(A ^ B) = P(B ^ A).  This proves the theorem, which shows that the order of conjuncts is of no logical importance.  

 </P>  
<HR><HR><A NAME="886"></A><H2>886.  CP Independence & Related Theorems </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Independence & Related Theorems </I> ]</UL>
<H4>Description</H4>                

   <P> If A and B are independent events, P(A|B) = P(A).  That is, the occurrence of B has no influence on the probability of A.

   </P> <P> A is independent of B if and only if B is independent of A.  

 </P>  
<HR><HR><A NAME="887"></A><H2>887.  C8.  P(A | B ) = P(A) iff P(B | A) = P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Independence & Related Theorems :: C8.  P(A | B ) = P(A) iff P(B | A) = P(B) </I> ]</UL>
<H4>Proof</H4>                

   <P> By Def of Cond Prob, P(A | B) = P(A) iff P(A) = P(A ^ B)/P(B).  Now multiplying both sides of this equation by P(B)/P(A) gives P(B) = P(A ^ B)/P(A), which (since A ^ B is truth-functionally equivalent to B ^ A) is true iff P(B) = P(B ^ A)/P(A); hence by def of cond prob, P(B) = P(B | A).  

 </P>  
<HR><HR><A NAME="888"></A><H2>888.  C9.  If A and B are independent, then P(A ^ B) = P(A) * P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Independence & Related Theorems :: C9.  If A and B are independent, then P(A ^ B) = P(A) * P(B) </I> ]</UL>
<H4>Proof</H4>                

   <P> This follows immediately from Thm C6 and the definition of independence.  

 </P>  
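<P> Thm C9 can be checked directly on two independent fair dice, where the sample space is all 36 ordered pairs.  The model and names are illustrative only. </P>

```python
from fractions import Fraction
from itertools import product

# Two independent fair dice: all 36 equally likely ordered pairs.
space = list(product(range(1, 7), repeat=2))

def P(pred):
    return Fraction(sum(1 for o in space if pred(o)), len(space))

a = lambda o: o[0] == 6          # first die shows six
b = lambda o: o[1] % 2 == 0      # second die is even

# C9: for independent A and B, P(A ^ B) = P(A) * P(B).
p_joint = P(lambda o: a(o) and b(o))
assert p_joint == P(a) * P(b)    # 1/12
```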
<HR><HR><A NAME="889"></A><H2>889.  Simple Form  C10.  P(A | B) = (P(A) * P(B | A)) / P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Simple Form  C10.  P(A | B) = (P(A) * P(B | A)) / P(B) </I> ]</UL>
<H4>Proof</H4>                

   <P> This follows immediately from Thm C7.  

 </P>  
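<P> As a worked sketch of the simple form, read A as "has some condition" and B as "tests positive".  The figures below are hypothetical, invented purely for illustration; P(B) is obtained by splitting the positive results between the A and &#172;A cases. </P>

```python
from fractions import Fraction

# Hypothetical figures, for illustration only:
p_a = Fraction(1, 100)            # P(A): prior probability of the condition
p_b_given_a = Fraction(9, 10)     # P(B | A): positive rate when A holds
p_b_given_not_a = Fraction(1, 10) # P(B | -A): false-positive rate

# P(B), splitting the positive results between A and -A:
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Bayes (Thm C10): P(A | B) = (P(A) * P(B | A)) / P(B)
p_a_given_b = p_a * p_b_given_a / p_b   # 1/12
```

<P> Even with a 90% positive rate, the posterior probability is only 1/12, because the condition itself is rare -- the usual moral drawn from Bayes' theorem. </P>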
<HR><HR><A NAME="890"></A><H2>890.  Exhaustive Series of Propositions or Events </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Exhaustive Series of Propositions or Events </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> A series of propositions or events A1, A2, ..., An is exhaustive if P(A1 v A2 v ... v An) = 1

</LI> </OL> <H4>Examples</H4><UL>            

   <LI> The series A1, A2, ..., A6, representing the six possible outcomes of a single toss of a single die, is exhaustive under the classical interpretation.

   </LI> <LI> A series of the form A ^ B, A ^ &#172;B, &#172;A ^ B, &#172;A ^ &#172;B is exhaustive, since the disjunction of its members is a tautology, whose probability is 1 by Ax2.  

 </LI> </UL> 
<HR><HR><A NAME="891"></A><H2>891.  Pairwise Mutually Exclusive Series </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Pairwise Mutually Exclusive Series </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> A series of propositions or events A1, A2, ..., An is pairwise mutually exclusive if for each pair (Ai, Aj) of its distinct members, P(Ai ^ Aj) = 0.

</LI> </OL> <H4>Examples</H4>                

   <P> The exhaustive series A1, A2, ..., A6 for the six outcomes of a die roll is pairwise mutually exclusive, since only one outcome may result from a single toss.  

 </P>  
<HR><HR><A NAME="892"></A><H2>892.  a.  P((A1 v A2 v ... v An) ^ B) = P(B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Important Lemma:  C11.  If A1, A2, ..., An is a pairwise mutually exclusive and exhaustive series, then P(B) = P(A1 ^ B) + P(A2 ^ B) + ... + P(An ^ B). :: a.  P((A1 v A2 v ... v An) ^ B) = P(B) </I> ]</UL>
Now as can be seen by repeated application of the distributive law of propositional logic, (A1 v A2 v ... v An) ^ B is truth-functionally equivalent to (A1 ^ B) v (A2 ^ B) v ... v (An ^ B).  Thus applying Thm 4 to item (a) we get (b) below.  

 
<HR><HR><A NAME="893"></A><H2>893.  b.  P(B) = P((A1 ^ B) v (A2 ^ B) v ... v (An ^ B)) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Important Lemma:  C11.  If A1, A2, ..., An is a pairwise mutually exclusive and exhaustive series, then P(B) = P(A1 ^ B) + P(A2 ^ B) + ... + P(An ^ B). :: b.  P(B) = P((A1 ^ B) v (A2 ^ B) v ... v (An ^ B)) </I> ]</UL>
Moreover, we get (c) below.  

 
<HR><HR><A NAME="894"></A><H2>894.  c.  The series (A1 ^ B), (A2 ^ B), ..., (An ^ B) is pairwise mutually exclusive. </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Important Lemma:  C11.  If A1, A2, ..., An is a pairwise mutually exclusive and exhaustive series, then P(B) = P(A1 ^ B) + P(A2 ^ B) + ... + P(An ^ B). :: c.  The series (A1 ^ B), (A2 ^ B), ..., (An ^ B) is pairwise mutually exclusive. </I> ]</UL>
For consider any two members (Ai ^ B), (Aj ^ B) of this series.  Since the series A1, A2, ..., An is pairwise mutually exclusive, we know that P(Ai ^ Aj) = 0.  Hence, by Thm 6 and Ax1, P(Ai ^ Aj ^ B) = 0.  So, by Thm 4, P((Ai ^ B) ^ (Aj ^ B)) = 0.  

 
<HR><HR><A NAME="895"></A><H2>895.  d.  Now from (c), it follows that for each i such that 1 <= i < n,  P(((A1 ^ B) v (A2 ^ B) v ... v (Ai ^ B)) ^ (Ai+1 ^ B)) = 0 </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Bayes' Theorems :: Important Lemma:  C11.  If A1, A2, ..., An is a pairwise mutually exclusive and exhaustive series, then P(B) = P(A1 ^ B) + P(A2 ^ B) + ... + P(An ^ B). :: d.  Now from (c), it follows that for each i such that 1 <= i < n,  P(((A1 ^ B) v (A2 ^ B) v ... v (Ai ^ B)) ^ (Ai+1 ^ B)) = 0 </I> ]</UL>
For again, by repeated application of the distributive law, ((A1 ^ B) v (A2 ^ B) v ... v (Ai ^ B)) ^ (Ai+1 ^ B) is truth-functionally equivalent to ((A1 ^ B) ^ (Ai+1 ^ B)) v ((A2 ^ B) ^ (Ai+1 ^ B)) v ... v ((Ai ^ B) ^ (Ai+1 ^ B)).  But by item (c), the probability of each of the disjuncts of this latter formula is zero; and repeated application of Thm 8 implies that any disjunction whose disjuncts all have probability zero must itself have probability zero.  Hence item (d) follows by Thm 4.

The theorem then follows from formula (b) by repeated application of formula (d) and Thm 5.  To see this, suppose for the sake of concreteness that n = 3.  Then formula (b) will read

	b'   P(B) = P((A1 ^ B) v (A2 ^ B) v (A3 ^ B))

	and we will have these two instances of formula (d):

	d'   P(((A1 ^ B) v (A2 ^ B)) ^ (A3 ^ B)) = 0

	d''   P((A1 ^ B) ^ (A2 ^ B)) = 0  
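<P> The lemma itself -- the law of total probability -- can be spot-checked numerically under the classical die model.  The three-part partition below is mine, chosen only for illustration. </P>

```python
from fractions import Fraction

space = range(1, 7)   # one fair die toss

def P(pred):
    return Fraction(sum(1 for o in space if pred(o)), len(space))

# A1, A2, A3 partition the space: pairwise exclusive and exhaustive.
a1 = lambda o: o <= 2
a2 = lambda o: o in (3, 4)
a3 = lambda o: o >= 5
b = lambda o: o % 2 == 0   # B: "roll even"

# C11: P(B) = P(A1 ^ B) + P(A2 ^ B) + P(A3 ^ B)
lhs = P(b)
rhs = sum(P(lambda o, a=a: a(o) and b(o)) for a in (a1, a2, a3))
assert lhs == rhs == Fraction(1, 2)
```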

 
<HR><HR><A NAME="896"></A><H2>896.  C13.  P(&#172;A | B) = 1 - P(A | B) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Theorems with &#172;, ^ and v :: C13.  P(&#172;A | B) = 1 - P(A | B) </I> ]</UL>
<H4>Proof</H4>                

<PRE>
By the Def of Cond Prob P(A v &#172;A | B) = P((A v &#172;A) ^ B) / P(B).  But (A v &#172;A) ^ B is truth-functionally equivalent to B.  Hence, by Thm 4, P((A v &#172;A) ^ B) = P(B), so that P(A v &#172;A | B) = P(B) / P(B) = 1.  Moreover, (A v &#172;A) ^ B is also truth-functionally equivalent to (A ^ B) v (&#172;A ^ B), so that by Thm 4,

	P(A v &#172;A | B) = P((A ^ B) v (&#172;A ^ B)) / P(B) = 1

Now A ^ B and &#172;A ^ B are mutually exclusive, so that by Ax 3,

	(P(A ^ B) + P(&#172;A ^ B)) / P(B) 

	= P(A ^ B) / P(B) + P(&#172;A ^ B) / P(B) = 1

So, by Def of Cond Prob,

	P(A | B) + P(&#172;A | B) = 1;

that is,

	P(&#172;A | B) = 1 - P(A | B).
</PRE>  

  
<HR><HR><A NAME="897"></A><H2>897.  C14.  P(A v B | C) = P(A | C) + P(B | C) - P(A ^ B | C) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Theorems with &#172;, ^ and v :: C14.  P(A v B | C) = P(A | C) + P(B | C) - P(A ^ B | C) </I> ]</UL>
<H4>Proof</H4>                
<PRE>
By Def of Cond Prob,

	P(A v B | C) = P((A v B) ^ C) / P(C)

But since (A v B) ^ C is truth-functionally equivalent to (A ^ C) v (B ^ C), we have, by Thm 4

	P(A v B | C) = P((A ^ C) v (B ^ C)) / P(C)

By Thm 5, this is equal to

	(P(A ^ C) + P(B ^ C) - P((A ^ C) ^ (B ^ C)))  /  P(C)

and since (A ^ C) ^ (B ^ C) is truth-functionally equivalent to (A ^ B) ^ C, by Thm 4 this in turn is equal to 

	P(A ^ C) / P(C)  +  P(B ^ C) / P(C)  -  P((A ^ B) ^ C) / P(C)

Which reduces by Def of Cond Prob to

	P(A | C) + P(B | C) - P(A ^ B | C)
</PRE>  

  
<HR><HR><A NAME="898"></A><H2>898.  C15.  P(A ^ B | C) = P(A | C) * P(B | A ^ C) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: CP Theorems with &#172;, ^ and v :: C15.  P(A ^ B | C) = P(A | C) * P(B | A ^ C) </I> ]</UL>
<H4>Proof</H4>                
<PRE>
By the Def of Cond Prob

	P(A ^ B | C) = P((A ^ B) ^ C) / P(C)

which by Thm 4 is equal to P((A ^ C) ^ B) / P(C).  By Thm C6, this becomes

	(P(A ^ C) * P(B | A ^ C))  /  P(C)

which reduces by the Def of Cond Prob to P(A | C) * P(B | A ^ C)
</PRE>  

  
<HR><HR><A NAME="899"></A><H2>899.  Thm 11.  P(A -> B) = P(&#172;A) + P(A) * P(B | A) </H2><UL>[ <I> Inference :: Induction :: Probabilities :: Probability Calculus :: Kolmogorov's Probability Theory (Varzi) :: Thm 11.  P(A -> B) = P(&#172;A) + P(A) * P(B | A) </I> ]</UL>
<H4>Proof</H4>                
<PRE>
A -> B is truth-functionally equivalent to &#172;(A ^ &#172;B), so that by Thm 4

	P(A -> B) = P(&#172;(A ^ &#172;B))

But by Thm 1, P(&#172;(A ^ &#172;B)) = 1 - P(A ^ &#172;B), and since, by Thm C6, P(A ^ &#172;B) = P(A) * P(&#172;B | A), we have

	P(A -> B) = 1 - (P(A) * P(&#172;B | A))

Now by Thm C13, P(&#172;B | A) = 1 - P(B | A); hence,

	P(A -> B) = 1 - (P(A) * (1 - P(B | A)))

so that:

	P(A -> B) = 1 - (P(A) - P(A) * P(B | A))

i.e.

	P(A -> B) = 1 - P(A) + P(A) * P(B | A)

But by Thm 1, 1 - P(A) = P(&#172;A), and so

	P(A -> B) = P(&#172;A) + P(A) * P(B | A).
</PRE>  
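<P> The identity can be spot-checked numerically; the die model below is an illustration, not part of the proof.  Here A -> B is the material conditional, truth-functionally equivalent to &#172;A v B. </P>

```python
from fractions import Fraction

space = range(1, 7)   # one fair die toss

def P(pred):
    return Fraction(sum(1 for o in space if pred(o)), len(space))

a = lambda o: o > 3          # A: "roll above 3"
b = lambda o: o % 2 == 0     # B: "roll even"

# P(A -> B) is the probability of the material conditional, i.e. of -A v B.
lhs = P(lambda o: (not a(o)) or b(o))

# Thm 11: P(A -> B) = P(-A) + P(A) * P(B | A)
p_b_given_a = P(lambda o: a(o) and b(o)) / P(a)
rhs = P(lambda o: not a(o)) + P(a) * p_b_given_a
assert lhs == rhs   # both 5/6
```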

  
<HR><HR><A NAME="900"></A><H2>900.  Sufficient Condition </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Necessary and Sufficient Conditions :: Sufficient Condition </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> A sufficient condition is one of multiple possible causes.  Whenever an event occurs, at least one sufficient condition is present.  The sufficient condition is the conjunction of the necessary conditions.

</LI> </OL> <H4>Examples</H4><UL>            

   <LI> Electrocution is sufficient to produce death.   (There are also other sufficient conditions such as poisoning, drowning, etc.)  

 </LI> </UL> 
<HR><HR><A NAME="901"></A><H2>901.  Necessary Condition </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Necessary and Sufficient Conditions :: Necessary Condition </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> A necessary condition is a condition that must be present for an event to occur, though it may not by itself be enough.  Whenever an event occurs, all of its necessary conditions are present.  The conjunction of the necessary conditions is the sufficient condition.

</LI> </OL> <H4>Examples</H4><UL>            

   <LI> The presence of clouds is a necessary condition for rain.  (Clouds are just one of several necessary conditions for rain, others include temperature and air pressure.)  

 </LI> </UL> 
<HR><HR><A NAME="902"></A><H2>902.  Necessary and Sufficient </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Necessary and Sufficient Conditions :: Necessary and Sufficient </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><OL>            

   <LI> Necessary and sufficient means that nothing more and nothing less is needed to cause something.

</LI> </OL> <H4>Examples</H4><UL>            

   <LI> The action of a force causes a body to accelerate.

   </LI> <LI> An increase in voltage causes an increase in electrical current.  

 </LI> </UL> 
<HR><HR><A NAME="903"></A><H2>903.  Mill's Methods </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Mill's Methods </I> ]</UL>
<H4>Description</H4>                

   <P> These are the methods of identifying causal connections between events as compiled by John Stuart Mill.

   </P> <P> Four of Mill's methods are usually applied using a table.  Down the left side of the table are listed the occurrences.  Across the top of the table are listed all the necessary and/or sufficient conditions.  The conditions are followed by a column for the phenomenon (the result or effect of the cause).  The body of the table is then marked up with stars '*' and dashes '-'.  If a particular condition was present for a particular occurrence, place a star in the proper location on the table.  Otherwise, place a dash.

</P>  <H4>Examples</H4><UL>            

   <LI> Four people go out to dinner and later that evening, all are feeling ill.  They guess it may be food poisoning and draw up a table to determine which food item (A, B, C or D) caused the poisoning.

<PRE>
occurrence	conditions		Phenomenon
(person)		A     B     C     D	Sickness

1		*     *     *     -	*
2		-     *     *     *	*
3		*     -     *     -	*
4		-     *     *     *	*
</PRE>  

 </LI> </UL> 
<HR><HR><A NAME="904"></A><H2>904.  Method of Agreement </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Mill's Methods :: Method of Agreement </I> ]</UL>
<H4>Description</H4>                

   <P> The method of agreement identifies a cause in the sense of a necessary condition.  Thus, it allows the elimination of conditions that are not necessary for the occurrence of the phenomenon.

   </P> <P> Using the table method, the list of conditions should be labeled <I>Possible Necessary Conditions</I>.  By the method of agreement, we are entitled to eliminate any condition that is absent when the phenomenon occurs, since this would indicate that the condition is not necessary for the phenomenon to exist.

</P>  <H4>Cautions</H4><UL>            

   <LI> The conclusion follows only with probability, for two reasons.  First, it is quite possible that some condition was overlooked in compiling the list of conditions.  For example, if one of the foods was served with contaminated spoons, then the sickness of the diners could have been caused by that condition and not by the food itself.  Second, if more than one of the foods was contaminated, then the sickness could have been caused by this combination of foods and not by one food alone.  Thus, the strength of the argument depends on the nonoccurrence of these two possibilities.

   </LI> <LI> Also, the conclusion does not assert that any occurrence which includes a particular condition will result in the phenomenon in question.  What the conclusion says is that those occurrences which do not involve the condition in question will not experience the phenomenon.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> Given the example from <I>Mill's Methods</I>, conditions A, B and D can be eliminated, meaning that C is a likely necessary condition and thus a possible cause.  

 </LI> </UL> 
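<P> The elimination step of the method of agreement is mechanical enough to sketch in code.  The following Python snippet (an illustration, not part of the outline's source) encodes the table from <I>Mill's Methods</I> as condition sets and keeps only the conditions present in every occurrence: </P>

```python
# The table from the Mill's Methods example: for each occurrence (person),
# the set of conditions (food items) present; the phenomenon (sickness)
# was present in every occurrence.
occurrences = [
    {"A", "B", "C"},   # person 1
    {"B", "C", "D"},   # person 2
    {"A", "C"},        # person 3
    {"B", "C", "D"},   # person 4
]

# Method of agreement: a condition absent when the phenomenon is present
# cannot be necessary, so only conditions common to all occurrences survive.
candidates = set.intersection(*occurrences)
assert candidates == {"C"}
```

Intersecting the condition sets implements the elimination rule directly: any condition missing from some occurrence is discarded as non-necessary, leaving C as the likely cause.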
<HR><HR><A NAME="905"></A><H2>905.  Method of Difference </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Mill's Methods :: Method of Difference </I> ]</UL>
<H4>Description</H4>                

   <P> The <B>method of difference</B> consists in a systematic effort to identify a single factor that is present in an occurrence in which the phenomenon in question is present, and absent from an occurrence in which the phenomenon is absent.  The method is confined to investigating exactly two occurrences, and it identifies a cause in the sense of a sufficient condition.

   </P> <P> As with the method of agreement, we proceed to eliminate certain conditions, but in this case we use the rule that a condition is not sufficient for the occurrence of a phenomenon if it is present when the phenomenon is absent.

</P>  <H4>Cautions</H4><UL>            

   <LI> The conclusion yielded by the method of difference is only probable, even for the one occurrence to which it directly applies.  The problem is that it is impossible for two occurrences to be literally identical in every respect but one.  The mere fact that two occurrences occupy different regions of space, that one is closer to the wall than the other, amounts to a difference.  Such differences may be insignificant, but therein lies the possibility for error.  It is not at all obvious how insignificant differences should be distinguished from significant ones.  Furthermore, it is impossible to make an exhaustive list of all the possible conditions; but without such a list there is no assurance that significant conditions have not been overlooked.

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> A pair of twins, Jane (1) and Jan (2), have dinner in a restaurant.  The twins have identical susceptibilities to food poisoning (A).  Jane orders soup (B), salad (C), chicken (D), carrots (E), rice (F) and ice cream (G).  Jan orders soup (B), salad (C), chicken (D), carrots (E), rice (F) and no ice cream (G).

   </LI> <LI> In this example, with the table drawn up, we must eliminate conditions A, B, C, D, E and F.  This leaves only G as the sufficient condition for the phenomenon.  Thus, G (ice cream) is the cause of Jane's sickness.  

 </LI> </UL> 
<HR><HR><A NAME="906"></A><H2>906.  Joint Method of Agreement and Difference </H2><UL>[ <I> Inference :: Induction :: Causal Reasoning :: Mill's Methods :: Joint Method of Agreement and Difference </I> ]</UL>
<H4>Description</H4>                

   <P> The <B>joint method of agreement and difference</B> consists of a systematic effort to identify a single condition that is present in two or more occurrences in which the phenomenon in question is present and that is absent from two or more occurrences in which the phenomenon is absent.  This condition is then taken to be the cause of the phenomenon in the sense of a necessary and sufficient condition.  Since the joint method yields a cause in the sense of both a necessary and sufficient condition, it is usually thought to be stronger than either of the previous two methods.

</P>  <H4>Examples</H4><UL>            

   <LI> Six people eat dinner in a restaurant each having the following:

<PRE>
(1) Liz	soup (B), hamburger (C), ice cream (E), french fries (F), mixed vegetables (G)
(2) Tom	salad (A), soup (B), fish (D), mixed vegetables (G), ice cream (E)
(3) Andy	salad (A), hamburger (C), french fries (F), ice cream (E)
(4) Sue	french fries (F), hamburger (C), salad (A)
(5) Meg	fish (D), mixed vegetables (G)
(6) Bill	french fries (F), hamburger (C), soup (B)
</PRE>

</LI> </UL>                
   <P> Later, Liz, Tom and Andy get sick.

   </P> <P> Using the rule that a condition is not necessary if it is absent when the phenomenon is present, occurrence 1 eliminates A and D, occurrence 2 eliminates C and F, and occurrence 3 eliminates B, D and G.  This leaves only E as the necessary condition.

   </P> <P> Next, in the last three occurrences the phenomenon is absent, so we use the rule that a condition is not sufficient if it is present when the phenomenon is absent.  Occurrence 4 eliminates A, C and F; occurrence 5 eliminates D and G; and occurrence 6 eliminates B, C and F.  This leaves only E as the sufficient condition.

   </P> <P> Thus, condition E is the cause, in the sense of both a necessary and sufficient condition, of the sickness of the first three diners.  

 </P>  
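<P> The two elimination rules of the joint method can likewise be sketched in Python.  The snippet below is illustrative only (variable names are assumptions); it encodes the six diners from the example above and applies both rules: </P>

```python
# The six diners: conditions (food items) present for each, plus whether
# the phenomenon (sickness) occurred.
meals = {
    "Liz":  ({"B", "C", "E", "F", "G"}, True),
    "Tom":  ({"A", "B", "D", "G", "E"}, True),
    "Andy": ({"A", "C", "F", "E"}, True),
    "Sue":  ({"F", "C", "A"}, False),
    "Meg":  ({"D", "G"}, False),
    "Bill": ({"F", "C", "B"}, False),
}

all_conditions = set().union(*(conds for conds, _ in meals.values()))

# Rule 1: a condition absent when the phenomenon is present is not necessary.
necessary = set.intersection(*(conds for conds, sick in meals.values() if sick))

# Rule 2: a condition present when the phenomenon is absent is not sufficient.
sufficient = all_conditions - set().union(
    *(conds for conds, sick in meals.values() if not sick))

assert necessary == {"E"} and sufficient == {"E"}
```

Only E (the ice cream) survives both eliminations, matching the conclusion of the worked example.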
<HR><HR><A NAME="907"></A><H2>907.  Metalogic (Nolt) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) </I> ]</UL>



<H4>Alternate Names</H4><UL>            

   <LI> Metalogic

</LI> </UL> <H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Metatheory is the logical study of formal logical systems.  Metatheory studies their semantics and what it means for a logical system to be free of error.  

 </P>  
<HR><HR><A NAME="908"></A><H2>908.  Definitions </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Definitions </I> ]</UL>
<H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#907" TARGET="baseframe">metatheory</A>.

</LI> </UL> <H4>Description</H4>                

   <P> The fundamental concepts of metalogic are the definitions.  Most reasoning in metalogic is done from these definitions.  For propositional logic, the usual definitions are linked below.

</P>  <H4>Definitions of Propositional Logic</H4><UL>            

   <LI> <A HREF="#393" TARGET="baseframe">Formation rules</A> of Propositional Logic.

   </LI> <LI> Definition of <A HREF="#402" TARGET="baseframe">valuation</A>

   </LI> <LI> Definition of <A HREF="#419" TARGET="baseframe">counter example</A>

   </LI> <LI> Definition of <A HREF="#418" TARGET="baseframe">valid argument</A>

   </LI> <LI> Definition of <A HREF="#417" TARGET="baseframe">invalid argument</A>

   </LI> <LI> Definition of <A HREF="#412" TARGET="baseframe">valid wff</A>

   </LI> <LI> Definition of <A HREF="#413" TARGET="baseframe">tautology</A>

   </LI> <LI> Definition of <A HREF="#409" TARGET="baseframe">inconsistent</A>

   </LI> <LI> Definition of <A HREF="#411" TARGET="baseframe">consistent</A>

   </LI> <LI> Definition of <A HREF="#415" TARGET="baseframe">contingent</A>

   </LI> <LI> Definition of <A HREF="#416" TARGET="baseframe">equivalent</A>

</LI> </UL> <H4>Definitions of Truth-Trees</H4><UL>            

   <LI> Definition of <A HREF="#455" TARGET="baseframe">path</A>

   </LI> <LI> Definition of <A HREF="#456" TARGET="baseframe">finished</A>

   </LI> <LI> Definition of <A HREF="#457" TARGET="baseframe">open path</A>

   </LI> <LI> Definition of <A HREF="#458" TARGET="baseframe">closed path</A>

   </LI> <LI> Definition of <A HREF="#459" TARGET="baseframe">occurrence</A>  

 </LI> </UL> 
<HR><HR><A NAME="909"></A><H2>909.  &#172;&#172;P is a wff. </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Direct Proof :: &#172;&#172;P is a wff. </I> ]</UL>
<DIV CLASS="GRAYBLOCK">
<H4>Metatheorem</H4>                

   <P> '&#172;&#172;P' is a wff.

</P>  <H4>Proof</H4>                

   <P> By wff formation rule 1, 'P' is a wff, whence it follows by rule 2 that '&#172;P' is a wff, and again by rule 2 that '&#172;&#172;P' is a wff.  QED
</P>  

  </DIV> 
<HR><HR><A NAME="910"></A><H2>910.  'P &#8594; &#172;P' is consistent. </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Direct Proof :: 'P &#8594; &#172;P' is consistent. </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> 'P &#8594; &#172;P' is consistent.

</P>  <H4>Proof</H4>                

   <P> Since (by the definition of a valuation) a valuation is simply an assignment of one of the values T or F to the proposition letters of a wff, there is a valuation <I><B>V</B></I> of the formula 'P' such that <I><B>V</B></I>( 'P' ) = F and hence <I><B>V</B></I>( 'P' ) &#8800; T.  By the valuation rule for the conditional, if <I><B>V</B></I>( 'P' ) &#8800; T, then <I><B>V</B></I>( 'P &#8594; &#172;P' ) = T.  Hence there is a valuation (namely <I><B>V</B></I>) on which 'P &#8594; &#172;P' is true.  It follows (by the definition of consistency) that 'P &#8594; &#172;P' is consistent. QED  

 </P>  
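<P> The proof can be confirmed by brute force.  A minimal Python sketch (names are illustrative) enumerates both valuations of 'P' and verifies that exactly the valuation assigning F to 'P' makes 'P &#8594; &#172;P' true: </P>

```python
from itertools import product

def implies(p, q):
    """Truth-functional conditional: false only when p is true and q false."""
    return (not p) or q

# 'P -> ¬P' is consistent iff some valuation makes it true; enumerating the
# two valuations of the letter P confirms the proof (V('P') = F works).
satisfying = [P for (P,) in product([True, False], repeat=1)
              if implies(P, not P)]
assert satisfying == [False]
```
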
<HR><HR><A NAME="911"></A><H2>911.  all wffs are bivalent </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Direct Proof :: all wffs are bivalent </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> Each formula of propositional logic is either true or false, but not both, on each of its valuations.

</P>  <H4>Proof</H4>                

   <P> Consider any wff &#934; of propositional logic and any valuation <I><B>V</B></I> of &#934;.  Since &#934; is a formula, &#934; is either atomic or complex.  If &#934; is atomic, then the definition for a valuation stipulates that <I><B>V</B></I> assigns it one, but not both, of the values T or F.  If &#934; is complex, then by wff formation rules 2 and 3 it must have one of five forms:  &#172;&#934;, (&#934; &#8743; &#936;), (&#934; &#8744; &#936;), (&#934; &#8594; &#936;), or (&#934; &#8596; &#936;).  Now the valuation rule for each of these forms stipulates that <I><B>V</B></I> assigns &#934; the value F iff <I><B>V</B></I> does not assign &#934; the value T.  No matter whether &#934; is atomic or complex, then, <I><B>V</B></I> assigns to &#934; one, but not both, of the values T or F.  QED  

 </P>  
<HR><HR><A NAME="912"></A><H2>912.  If the set of premises of a valid sequent is consistent, then so is the conclusion </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Direct Proof :: If the set of premises of a valid sequent is consistent, then so is the conclusion </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> If the set of premises of a valid sequent is consistent, then so is the conclusion.

</P>  <H4>Proof</H4>                

   <P> <UL> Suppose (for conditional proof) that the set &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> of premises of some valid sequent with conclusion &#936; is consistent.  Then (by the definition of consistency for a set) there is at least one valuation <I><B>V</B></I> on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are all true.  But (by the definition of validity) there is no valuation on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are all true and &#936; is not true.  Thus &#936; is not untrue on <I><B>V</B></I> and so must be true on <I><B>V</B></I>.  Hence (by the definition of consistency) &#936; is consistent.
   </UL>

   Therefore, if the set of premises of a valid sequent is consistent, then so is the conclusion.  QED  

 </P>  
<HR><HR><A NAME="913"></A><H2>913.  Conditional Proof </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Conditional Proof </I> ]</UL>
<H4>Description</H4>                

   <P> When the conclusion takes the form of a conditional statement, it's usually easiest to form a proof using the Conditional Proof technique.  This involves hypothesizing (or supposing) the antecedent and attempting to derive the consequent.  This is all done as a hypothetical proof, and so it's all indented.  The conclusion is the conditional being proven.

   </P> <P> Most hypothetical proofs for conditional arguments follow a common pattern:<BR>
<PRE>
      Unpacking -- Logical Manipulation -- Repacking
</PRE>

   </P> <P> Unpacking means replacing the terms given in the metatheorem (in this case its antecedent) with their definitions.

   </P> <P> Logical Manipulation means deductive reasoning.  The goal of this reasoning should be to end in a proposition which is the definition of some term (in this case the consequent).

   </P> <P> Repacking means replacing the deduced proposition with the term it defines.

  

 </P>  
<HR><HR><A NAME="914"></A><H2>914.  'valid sequent' implies 'consistent conclusion' </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Conditional Proof :: 'valid sequent' implies 'consistent conclusion' </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> If the set of premises of a valid sequent is consistent, then so is the conclusion.

</P>  <H4>Proof</H4>                

<UL>Suppose (for conditional proof) that the set &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> of premises of some valid sequent with conclusion &#936; is consistent.  Then (by the <A HREF="#411" TARGET="baseframe">definition of consistency</A> for a set) there is at least one valuation <I><B>V</B></I> on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are all true.  But (by the <A HREF="#418" TARGET="baseframe">definition of validity</A>) there is no valuation on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are all true and &#936; is not true.  Thus &#936; is not untrue on <I><B>V</B></I> and so must be true on <I><B>V</B></I>.  Hence (by the <A HREF="#411" TARGET="baseframe">definition of consistency</A>) &#936; is consistent.</UL>

Therefore, if the set of premises of a valid sequent is consistent, then so is the conclusion.  QED  

  
<HR><HR><A NAME="915"></A><H2>915.  'sequent is valid' iff 'premises and negated conclusion are inconsistent'. </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Conditional Proof :: 'sequent is valid' iff 'premises and negated conclusion are inconsistent'. </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> A sequent is valid if and only if the set containing its premises and the negation of its conclusion is inconsistent.

</P>  <H4>Proof</H4>                

   <P> <UL> Suppose (for conditional proof) that &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB>  &#8870;  &#936; is a valid sequent.  Then (by the <A HREF="#418" TARGET="baseframe">definition of validity</A>) there is no valuation in which its premises &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are all true and its conclusion &#936; is not true.  From the second conjunct it follows (by <A HREF="#403" TARGET="baseframe">negation valuation rule 1</A>) that there is no valuation on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> and &#172;&#936; are all true -- that is, (by the definition of <A HREF="#411" TARGET="baseframe">consistency</A>) that the set { &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB>, &#172;&#936; } is inconsistent.
</UL>

   </P> <P> Hence we have shown that if a sequent is valid, then the set consisting of its premises and the negation of its conclusion is inconsistent.

   </P> <P> <UL> Now suppose (again for conditional proof) that the set { &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB>, &#172;&#936; } is <A HREF="#409" TARGET="baseframe">inconsistent</A>.  This means that there is no valuation on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> and &#172;&#936; are all true.  Hence from the second conjunct (by <A HREF="#403" TARGET="baseframe">negation valuation rule 1</A>) there is no valuation on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are true and &#936; is not true, which (being the definition of a <A HREF="#418" TARGET="baseframe">valid argument</A>) is to say that the sequent &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> &#8870; &#936; is valid.
</UL>

   </P> <P> Thus, if the set containing a sequent's premises and the negation of its conclusion is inconsistent, then that sequent is valid.  In summary, we have shown that a sequent is valid <I>if and only if</I> the set containing its premises and the negation of its conclusion is inconsistent.  QED  

 </P>  
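<P> For any particular sequent, both directions of this metatheorem can be checked by enumerating valuations.  The Python sketch below uses the sequent P, P &#8594; Q &#8870; Q as a worked instance (an illustrative assumption; the metatheorem itself is fully general): </P>

```python
from itertools import product

implies = lambda p, q: (not p) or q

# Example sequent: P, P -> Q  |-  Q.  Brute-force over all valuations of P, Q.
valuations = list(product([True, False], repeat=2))

# Valid: no valuation makes all premises true and the conclusion untrue.
valid = all(q for p, q in valuations if p and implies(p, q))

# The premises together with the negated conclusion have no satisfying valuation.
inconsistent = not any(p and implies(p, q) and (not q) for p, q in valuations)

assert valid and inconsistent
```

Both checks come out true together, as the metatheorem predicts.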
<HR><HR><A NAME="916"></A><H2>916.  Reductio Ad Absurdum </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Reductio Ad Absurdum </I> ]</UL>
<H4>Description</H4>                

   <P> This is simply negation introduction.  

 </P>  
<HR><HR><A NAME="917"></A><H2>917.  There is no invalid sequent with inconsistent premises </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Reductio Ad Absurdum :: There is no invalid sequent with inconsistent premises </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> There is no invalid sequent with an inconsistent set of premises.

</P>  <H4>Proof</H4>                

   <P> <UL> Suppose for reductio that there is an invalid sequent with an inconsistent premise set { &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> }.  Since the sequent is invalid, there is (by the definition of invalidity) some valuation <I><B>V</B></I> on which &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are all true and the argument's conclusion is not true.  But since &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> are true on <I><B>V</B></I>, { &#934;<SUB>1</SUB>, ..., &#934;<SUB>n</SUB> } is consistent (by the definition of consistency for a set), which contradicts our supposition.
</UL>

   </P> <P> Consequently, there is no invalid argument with an inconsistent set of premises.  QED  

 </P>  
<HR><HR><A NAME="918"></A><H2>918.  Mixed Strategies </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Mixed Strategies </I> ]</UL>
<H4>Description</H4>                

   <P> It's more common to mix strategies.  For example, to prove a conditional, you may begin a subproof for conditional proof, hypothesizing the antecedent.  The next step of the proof will then often be to hypothesize a negated version of the consequent for reductio.  

 </P>  
<HR><HR><A NAME="919"></A><H2>919.  Example </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Mixed Strategies :: Example </I> ]</UL>
<H4>Metatheorem</H4>                

   <P> If the conclusion of one valid sequent is &#934; and the conclusion of a second valid sequent is &#172;&#934;, then the set consisting of all the premises of both sequents is inconsistent.

</P>  <H4>Proof</H4>                

   <P> <UL>Suppose for conditional proof that the conclusion of one valid sequent is &#934; and the conclusion of a second valid sequent is &#172;&#934;.

               <UL>Now suppose for reductio that the set consisting of all the premises of both sequents is consistent.  That is (by the definition of consistency for sets), there is some valuation <I><B>V</B></I> which makes each member of this set true.  Then all the premises of both sequents are true on <I><B>V</B></I>; and, since both sequents are valid, it follows by the definition of validity that neither the conclusion &#934; nor the conclusion &#172;&#934; is untrue on <I><B>V</B></I>.  Therefore both <I><B>V</B></I>(&#934;) = T and <I><B>V</B></I>(&#172;&#934;) = T, and so we have a contradiction.</UL>

   Thus, contrary to our reductio supposition, the set consisting of all the premises of both sequents is inconsistent.</UL>

   </P> <P> So, if the conclusion of one valid sequent is &#934; and the conclusion of a second valid sequent is &#172;&#934;, then the set consisting of all the premises of both sequents is inconsistent.  QED  

 </P>  
<HR><HR><A NAME="920"></A><H2>920.  Mathematical Induction </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Mathematical Induction </I> ]</UL>
<H4>Description</H4>                

   <P> Mathematical induction (which is actually a form of deduction) is a technique for reasoning about a <B>series</B> of things, called <B>members</B>.  By series it is specifically meant that the members are ordered and that there is a first member.  Thus, it is possible to speak of the <I>nth</I> member (i.e. 1st, 2nd, 3rd, 4th, ...).

   </P> <P> Specifically, mathematical induction is used to prove that some property P is true of all members of the series.  The proof technique is fairly simple.  It requires the proof of two cases.  The first case, called the <B>Basis Case</B>, is a proof that P is true of the first member.  The second case, called the <B>Inductive Case</B>, is a conditional proof of "if P is true of the nth member, then P is true of the (n + 1)th member," where n is an arbitrary position in the series and n + 1 is the next.  The hypothesis of this conditional proof is called the <I>Inductive Hypothesis</I>.  

 </P>  
<HR><HR><A NAME="921"></A><H2>921.  Metatheorem 1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Mathematical Induction :: Examples :: Metatheorem 1 </I> ]</UL>
<H4>METATHEOREM</H4>                

   <P> Each item in series S (P, &#172;P, &#172;&#172;P, &#172;&#172;&#172;P, ...)  is a wff.

</P>  <H4>PROOF</H4>                

   <P> Basis Case<BR>
   The first member of S is 'P', which (by formation rule 1) is a wff.<BR>

   </P> <P> Inductive Case<BR>
   <UL> Suppose that the nth member of S is a wff.  (This is the inductive hypothesis; it initiates the conditional proof.)  Now the (n + 1)st member is the result of prefixing the nth with a negation sign.  Therefore (by formation rule 2 and the inductive hypothesis) the (n + 1)st item of S is a formula.</UL>

   </P> <P> Thus (by conditional proof) it follows that if the nth member of S is a wff, then so is the (n + 1)st.  Hence (by mathematical induction) each member of S is a wff.  QED  

 </P>  
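<P> Metatheorem 1 can also be checked mechanically for as many members as we like.  This Python sketch (using '~' as an ASCII stand-in for '&#172;'; the recognizer covers only formation rules 1 and 2 for the single letter 'P', and is an illustration, not a full grammar) confirms that every member of the series is a wff: </P>

```python
def is_wff(s):
    """Recognize the negation fragment: rule 1 (the sentence letter 'P' is a
    wff) and rule 2 (a negation sign prefixed to a wff yields a wff)."""
    if s == "P":
        return True
    return s.startswith("~") and is_wff(s[1:])

# Every member of the series P, ~P, ~~P, ... is a wff, as the metatheorem states.
assert all(is_wff("~" * n + "P") for n in range(50))
```

The recursion in `is_wff` mirrors the inductive structure of the proof: the base case is rule 1, and each recursive call peels off one negation sign by rule 2.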
<HR><HR><A NAME="922"></A><H2>922.  Metatheorem 2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Mathematical Induction :: Examples :: Metatheorem 2 </I> ]</UL>
<H4>METATHEOREM</H4>                

   Each member of series T (P, (P &#8743; P), ((P &#8743; P) &#8743; P), (((P &#8743; P) &#8743; P) &#8743; P), ...) is logically equivalent to 'P'.

 <H4>PROOF</H4>                

   <P> Basis Case<BR>
   The first member of T is 'P', which (trivially) has the same truth value as 'P' on any valuation.  Hence the first item of T is logically equivalent to 'P'.

   </P> <P> Inductive Case<BR>
   <UL>Suppose that the nth member &#934; of T is logically equivalent to 'P'.  That is, &#934; is true on any valuation on which 'P' is true and false on any valuation on which 'P' is false.  Now the (n + 1)st member is of the form (&#934; &#8743; P).  On any valuation on which 'P' is true, therefore, both conjuncts of (&#934; &#8743; P) are true; similarly, on any valuation on which 'P' is false, both conjuncts of (&#934; &#8743; P) are false.  Thus, by the valuation rule for conjunction, (&#934; &#8743; P) is true on any valuation on which 'P' is true and false on any valuation on which 'P' is false.  Thus 'P' has the same truth value as (&#934; &#8743; P) on every valuation of both, and so (&#934; &#8743; P), which is the (n + 1)st member in the series, is logically equivalent to 'P'.</UL>

   </P> <P> Thus (by conditional proof) it follows that if the nth item of T is logically equivalent to 'P', then so is the (n + 1)st.  Hence (by mathematical induction) each item of T is logically equivalent to 'P'.  QED  

 </P>  
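<P> A quick truth-table check of the first several members of T supports the metatheorem.  The Python sketch below is an illustration: it verifies finitely many members by evaluating them directly, whereas the induction above covers the whole series: </P>

```python
# Truth-table check of Metatheorem 2 for early members of series T:
# P, (P ^ P), ((P ^ P) ^ P), ...  Each should agree with 'P' on both valuations.
for P in (True, False):
    member = P                 # the first member of T is just P
    for _ in range(10):        # build successive members by conjoining with P
        assert member == P     # the current member agrees with P
        member = member and P  # form the next member, (member ^ P)
```
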
<HR><HR><A NAME="923"></A><H2>923.  Metatheorem 3 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Mathematical Induction :: Examples :: Metatheorem 3 </I> ]</UL>
<H4>METATHEOREM</H4>                

   <P> For all n, the tree constructed by using the nth member of T (P<SUB>1</SUB>, (P<SUB>1</SUB> &#8744; P<SUB>2</SUB>), ((P<SUB>1</SUB> &#8744; P<SUB>2</SUB>) &#8744; P<SUB>3</SUB>), (((P<SUB>1</SUB> &#8744; P<SUB>2</SUB>) &#8744; P<SUB>3</SUB>) &#8744; P<SUB>4</SUB>), ... ) as its initial list has exactly n paths.

</P>  <H4>PROOF</H4>                

   <P> Basis Case<BR>
   The first member of T is 'P<SUB>1</SUB>'.  Since 'P<SUB>1</SUB>' is atomic, the tree constructed by using it as the initial list is finished as soon as 'P<SUB>1</SUB>' is written, and it contains one path.

   </P> <P> Inductive Case<BR>
   <UL>Suppose (inductive hypothesis) that the tree constructed by using the nth member of T as its initial list has exactly n paths.  Now the (n + 1)st member is obtained from the nth by disjoining it with 'P' subscripted by the numeral for n + 1.  Thus when the (n + 1)st member is used as the initial list of a tree, the only possible first move is to check it and branch to the nth member on the left and to 'P' subscripted by the numeral for n + 1 on the right.  The right path is then finished, since the initial formula is checked and 'P' with its subscript is atomic.  And the left path below the initial formula will consist simply of the tree for the nth item of T, which by hypothesis has exactly n paths.  Hence the whole tree must contain exactly n + 1 paths.</UL>

   </P> <P> Thus we have shown (by conditional proof) that if the tree constructed by using the nth member of T as its initial list has exactly n paths, then the tree constructed by using the (n + 1)st member of T as its initial list has exactly n + 1 paths.  So (by mathematical induction) for all n, the tree constructed by using the nth member of T as its initial list has exactly n paths.  QED  

 </P>  
<HR><HR><A NAME="924"></A><H2>924.  Conditional Proof </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Templates :: Conditional Proof </I> ]</UL>
<DIV CLASS="GRAYBLOCK">

<H4>METATHEOREM</H4>                

   <P> if [ANTECEDENT], then [CONSEQUENT]

</P>  <H4>PROOF</H4>                

   <P> <UL>Suppose for conditional proof that [ANTECEDENT].
      <DIV CLASS="GRAYBLOCK">
      [Unpacking]<BR>
      [Logical Manipulation]<BR>
      [Repacking]
      </DIV>
      Therefore [CONSEQUENT]
      </UL>

   </P> <P> Hence (by conditional proof) if [ANTECEDENT], then [CONSEQUENT].
</P>

 </DIV> <H4>Notes</H4><UL>            

   <LI> [Unpacking] means taking defined terms used within the proof and <I>expanding</I> them, replacing them with their definitions.

   </LI> <LI> [Repacking] means taking definitions within the proof and replacing them with the terms they define.  

 </LI> </UL> 
<HR><HR><A NAME="925"></A><H2>925.  Reductio Ad Absurdum </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Templates :: Reductio Ad Absurdum </I> ]</UL>
<DIV CLASS="GRAYBLOCK">

<H4>METATHEOREM</H4>                

   <P> [CONCLUSION]

</P>  <H4>PROOF</H4>                

   <P> <UL>Suppose for reductio that [DENIAL OF CONCLUSION]
      <DIV CLASS="GRAYBLOCK">
      [Unpacking]<BR>
      [Logical Manipulation]<BR>
      </DIV>
      Therefore [CONTRADICTION]
      </UL>

   </P> <P> Hence (by reductio) [CONCLUSION].
</P>

 </DIV> <H4>Notes</H4><UL>            

   <LI> [Unpacking] means taking defined terms used within the proof and <I>expanding</I> them, replacing them with their definitions.  

 </LI> </UL> 
<HR><HR><A NAME="926"></A><H2>926.  Mathematical Induction </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Techniques for Proving Metatheorems :: Templates :: Mathematical Induction </I> ]</UL>
<DIV CLASS="GRAYBLOCK">

<H4>METATHEOREM</H4>                

   <P> All members of [SERIES] have property F.

</P>  <H4>PROOF</H4>                

   <P> <B>Base Case</B><BR>
   <DIV CLASS="GRAYBLOCK"> (Style of argument here varies but is often trivial.) </DIV>
   Therefore the first member of [SERIES] has property F.

   </P> <P> <B>Inductive Case</B><BR>
      <UL>Suppose (with an inductive hypothesis) that the nth member of [SERIES] has property F.

         <DIV CLASS="GRAYBLOCK">
         [Unpacking]<BR>
         [Logical Manipulation]<BR>
         [Repacking]
         </DIV>
      Therefore the (n + 1)st member of [SERIES] has property F.
      </UL>

   </P> <P> Hence (by conditional proof) we have shown that for any n, if the nth member of [SERIES] has property F, so does the (n + 1)st.  Consequently (using mathematical induction to combine this conclusion with the conclusion of the base case) all members of [SERIES] have property F.
</P>

 </DIV> <H4>Notes</H4><UL>            

   <LI> [Unpacking] means taking defined terms used within the proof and <I>expanding</I> them, replacing them with their definitions.

   </LI> <LI> [Repacking] means taking definitions within the proof and replacing them with the defined terms.  

 </LI> </UL> 
<HR><HR><A NAME="927"></A><H2>927.  Decidability </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability </I> ]</UL>
<H4>Description</H4>                

   <P> In this section we prove the decidability of Propositional Logic.

</P>  <H4>See Also</H4>                

   <P> Definition of <A HREF="#1285" TARGET="baseframe">Decidable</A>.  

 </P>  
<HR><HR><A NAME="928"></A><H2>928.  Tree Test </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test </I> ]</UL>
<H4>Description</H4>                

   <P> What we shall show is that the tree test (the use of truth trees for testing argument validity) is a true decision procedure (an algorithm that always terminates and always delivers a correct verdict).  To prove this, we shall show that:

</P>  <H4> </H4><OL>            

   <LI> The tree test for propositional logic is in fact a <I>terminating</I> algorithm.

   </LI> <LI> If the tree test classifies the sequent as valid (i.e., all paths of its finished tree close), then that sequent is valid.  (This is called the <B>soundness</B> of the logic.  A set of inference rules of a logic is sound iff it is not possible to derive a conclusion inconsistent with the premises.)

   </LI> <LI> If a sequent is valid, the tree test classifies that sequent as valid (i.e., all paths of its finished tree close).  (This is called the <B>completeness</B> of the logic.  A conclusion is a consequence of a set of premises iff it is derivable from the premises using the inference rules.)

</LI> </OL> <H4>See Also</H4><UL>            

   <LI> Definition of <A HREF="#1276" TARGET="baseframe">Algorithm</A>.  

 </LI> </UL> 
<HR><HR><A NAME="929"></A><H2>929.  Character Count </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Character Count </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The <B>character count</B> of an open path is the total number of characters (logical operators, sentence letters, and parentheses -- numerical subscripts don't count) contained in unchecked formulas on that path.  The character count of a closed path is zero.

</P>  <H4>Notes</H4><UL>            

   <LI> For purposes of calculating the character count, the formation rules must be followed strictly.  This means that outer parentheses may not be dropped; they are included in the count.  

 </LI> </UL> 
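<P> As an illustration (not from the text), the character count can be sketched in Python.  This is a minimal sketch assuming single-character operators and sentence letters; digits (numerical subscripts) and whitespace are ignored, per the definition above.  Note that applying a tree rule, such as decomposing a conjunction, strictly lowers the count. </P>

```python
def character_count(path, closed=False):
    """Character count of a path: the total number of characters
    (operators, sentence letters, parentheses) in the unchecked
    formulas on the path.  Numerical subscripts and spaces are not
    counted.  A closed path counts as zero."""
    if closed:
        return 0
    return sum(1 for formula in path
                 for ch in formula
                 if not ch.isdigit() and not ch.isspace())

# Decomposing the conjunction (P & Q) into P, Q (and checking the
# original formula) drops the count from 5 to 2:
before = character_count(["(P & Q)"])   # 5
after = character_count(["P", "Q"])     # 2
```
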
<HR><HR><A NAME="930"></A><H2>930.  One-Step Extension </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: One-Step Extension </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A path P<SUB>2</SUB> is a <B>one-step extension</B> of a path P<SUB>1</SUB> iff P<SUB>2</SUB> is obtained from P<SUB>1</SUB> by applying a single tree rule to some unchecked formula or (in the case of the negation rule) pair of unchecked formulas of P<SUB>1</SUB>.  

 </P>  
<HR><HR><A NAME="931"></A><H2>931.  Lemma 1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Lemma 1 </I> ]</UL>
<H4>Description</H4>                

   <P> This first <A HREF="#1291" TARGET="baseframe">lemma</A> shows in effect that if an initial list has a finite character count (which is always the case), then any path it generates must be finitely long as well.  

 </P>  
<HR><HR><A NAME="932"></A><H2>932.  The Proof </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Lemma 1 :: The Proof </I> ]</UL>
<DIV CLASS="GRAYBLOCK">
<H4>Lemma 1</H4>                

   <P> If the character count of the tree's initial list is <I>n</I>, then each path of the tree must be finished after at most <I>n</I> applications of the tree rules to formulas on that path.

</P>  <H4>Proof</H4>                

   <P> <UL>Suppose the character count of the tree's initial list is <I>n</I>.  Now when any of the rules is applied to a formula on a path <I>P</I>, each of the resulting one-step extensions of <I>P</I> has a character count at least one less than the character count of <I>P</I> (check this for each of the truth tree rules).  Further, the minimum character count for any path is zero.  Thus, since the character count of the initial list is <I>n</I>, and each application of a rule decreases the character count of the resulting one-step extensions by at least one, at most <I>n</I> applications of the rules can be made to formulas on a path before that path is finished.
   </UL>

   </P> <P> Hence, if the character count of the tree's initial list is <I>n</I>, then each path of the tree must be finished after at most <I>n</I> applications of the tree rules to formulas on that path.  QED
</P>  

  </DIV> 
<HR><HR><A NAME="933"></A><H2>933.  Infinitely Prolongable </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Infinitely Prolongable </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A path P is <B>infinitely prolongable</B> iff there exists an infinite series P<SUB>0</SUB>, P<SUB>1</SUB>, ... of paths such that P<SUB>0</SUB> = P and, for each <I>n</I>, P<SUB>n + 1</SUB> is a one-step extension of P<SUB>n</SUB>.

</P>  <H4>Notes</H4><UL>            

   <LI> This says that a path is <B>infinitely prolongable</B> when it is always possible to add another one-step extension at its end, so that the path can grow <I>endlessly longer</I>.  

 </LI> </UL> 
<HR><HR><A NAME="934"></A><H2>934.  Nonterminating </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Nonterminating </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A path P is <B>nonterminating</B> iff there exists an infinite series T<SUB>0</SUB>, T<SUB>1</SUB>, ... such that T<SUB>0</SUB> = P and, for each n, T<SUB>n + 1</SUB> is the result of applying a single rule to an unchecked wff or (in the case of the negation rule) pair of unchecked wffs somewhere in T<SUB>n</SUB>.

</P>  <H4>Notes</H4><UL>            

   <LI> This concept describes a path P from which it is possible to keep applying rules without end -- not necessarily to the same node each time, but to some node at or below P on every step.  Essentially, this describes an ability to grow the tree below P endlessly more 'bushy'.

   </LI> <LI> This concept seems broader than <I>infinitely prolongable</I>.  

 </LI> </UL> 
<HR><HR><A NAME="935"></A><H2>935.  Lemma 2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Lemma 2 </I> ]</UL>
<H4>Description</H4>                

   <P> This next lemma shows that a nonterminating path can always be prolonged: it has at least one nonterminating one-step extension.  

 </P>  
<HR><HR><A NAME="936"></A><H2>936.  The Proof </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Propositional :: Decidability :: Tree Test :: ...Is a Terminating Algorithm :: Lemma 2 :: The Proof </I> ]</UL>
<DIV CLASS="GRAYBLOCK">
<H4>Lemma 2</H4>                

   <P> If P is a nonterminating path, then P has a nonterminating one-step extension.

</P>  <H4>Proof</H4>                

   <P> <UL>Suppose (for conditional proof) that P is a nonterminating path.  That is, there is an infinite series T<SUB>0</SUB>, T<SUB>1</SUB>, ... such that T<SUB>0</SUB> = P and, for each n, T<SUB>n + 1</SUB> is the result of applying a single rule somewhere in T<SUB>n</SUB>.  Thus in particular T<SUB>1</SUB> is the result of applying a single rule to a formula or pair of formulas of P.  Since no single application of a rule can split a path into more than two paths, T<SUB>1</SUB> contains at most two paths -- maybe only one.

      <UL>Now suppose for reductio that P does not have a nonterminating one-step extension.  This means that no path of T<SUB>1</SUB> is nonterminating.  Hence there can't be an infinite succession of rule-applications starting with any path of T<SUB>1</SUB>.  But since T<SUB>1</SUB> has at most two paths, it follows that there can't be an infinite succession of rule-applications to T<SUB>1</SUB> itself, since the total number of rule-applications for T<SUB>1</SUB> is just the total number for its paths, and this number being the sum of at most two finite quantities, is finite.  Hence there is no infinite series T<SUB>0</SUB>, T<SUB>1</SUB>, ... such that T<SUB>0</SUB> = P and, for each <I>n</I>, T<SUB>n + 1</SUB> is the result of applying a single rule somewhere in T<SUB>n</SUB>, in contradiction to what we concluded earlier.
      </UL>

   Hence, contrary to our supposition, P does have a nonterminating one-step extension.
   </UL>

   </P> <P> Thus we have shown that if P is a nonterminating path, then P has a nonterminating one-step extension.  

 </P>  </DIV>
<HR><HR><A NAME="937"></A><H2>937.  Aristotlean Logic </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic </I> ]</UL>
<H4>Description</H4>                

   <P> Here we examine Aristotelean Logic in relation to FOL and discover that the traditional interpretation is flawed in at least two areas.  

 </P>  
<HR><HR><A NAME="938"></A><H2>938.  vs. First-Order Logic </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic </I> ]</UL>
<H4>Description</H4>                

   <P> It's an interesting exercise to model Aristotelian logic (using the traditional interpretation) with FOL.  At first it seems a trivial task.  However, it soon turns out to be problematic.  Consider this proposition: 'All cats are mammals'.  By the square of opposition, using subalternation, we can infer 'Some cats are mammals'.  And this seems a valid inference.  But if we try to build a FOL proof, we are soon stumped.  This is because such a proof is not possible.  But why?

   </P> <P> The reason is that if we wish to deal with logic in a FORMAL sense, we may only accept as valid those forms for which ALL instances are valid.  Consider this alternate argument.

<PRE>
	All students who get 100 on all their tests will not have to take the final.
	So, there are some students who will get 100 on all their tests and won't have to take the final.
</PRE>

   </P> <P> This argument is of the same form as the one above, but its conclusion is not necessarily true.  Maybe by the end of term it comes out that nobody got 100 on all their tests.  Thus, subalternation is simply not a valid formal rule of inference.  The root of the problem is simply this: does the subject of a universal proposition necessarily exist?  The modern interpretation says 'no'.  And this is reflected in FOL, which is why we can't derive the conclusion, and why the modern square of opposition is quite a bit simpler than the traditional one.  Aristotle is a bit more liberal in his interpretation, and the traditional square of opposition is based on the assumption that the subject of a universal DOES exist.

   </P> <P> So, if we wish to model Aristotle's logic, we must introduce an axiom.  

 </P>  
<HR><HR><A NAME="939"></A><H2>939.  Aristotle's Interpretation </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Aristotle's Interpretation </I> ]</UL>
<H4>Description</H4>                

   <P> According to the article entitled <I>Square of Opposition</I> in the Stanford Encyclopedia of Philosophy (http://plato.stanford.edu), Aristotle's interpretation was quite different from the one traditionally attributed to him.  They argue that Aristotle gave existential import to the affirmatives, while the negatives lacked it.

   </P> <P> Only by this interpretation is it possible to derive the entire Square of Opposition as theorems.

   </P> <P> Under this view, the four forms may be translated into FOL as follows:

<PRE>
A(S,P)  =def	Ax(Sx -> Px) ^ ExSx
E(S,P)  =def	Ax(Sx -> -Px)
I(S,P)  =def	Ex(Sx ^ Px)
O(S,P)  =def	Ex(Sx ^ -Px) v -ExSx
</PRE>

   </P> <P> Under this interpretation, the Square of Opposition, plus the conversion transformation is completely coherent.  The remaining transformations do not cohere with the rest of the system, and in fact appear to be elements added by later scholars.  

 </P>  
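<P> These translations can be checked mechanically.  The following Python sketch (an illustration, not part of the original text) enumerates every extension of S and P over a small finite domain and verifies that, under the translations above, all the relations of the traditional Square of Opposition hold: both pairs of contradictories, the contraries, the subcontraries, and both subalternations. </P>

```python
from itertools import product

def check_square(domain_size=3):
    """Verify the traditional Square of Opposition on every model over a
    small finite domain, using the translations above (existential
    import on the A form, its denial disjoined into the O form)."""
    domain = range(domain_size)
    extensions = product([False, True], repeat=domain_size)
    for S_bits, P_bits in product(extensions, repeat=2):
        some_S = any(S_bits[x] for x in domain)
        A = some_S and all(P_bits[x] for x in domain if S_bits[x])
        E = all(not P_bits[x] for x in domain if S_bits[x])
        I = any(S_bits[x] and P_bits[x] for x in domain)
        O = any(S_bits[x] and not P_bits[x] for x in domain) or not some_S
        assert (not A) == O and (not E) == I   # contradictories
        assert not (A and E)                   # contraries
        assert I or O                          # subcontraries
        assert (not A) or I                    # subalternation A -> I
        assert (not E) or O                    # subalternation E -> O
    return True
```

Dropping the ExSx conjunct from the A form (the modern reading) makes the subalternation check fail on models where S is empty, which is exactly the point made in the previous section.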
<HR><HR><A NAME="940"></A><H2>940.  |-   -A(S,P) <-> O(S,P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Square of Oppositions :: Contradictories :: |-   -A(S,P) <-> O(S,P) </I> ]</UL>
|-   -A(S,P) <-> O(S,P)

01.   |   -A(S,P)		H (for ->I)
02.   |   -(Ax(Sx -> Px) ^ ExSx)	1 def A-Form
03.   |   -Ax(Sx -> Px) v -ExSx	2 DM
04.   |   Ex-(Sx -> Px) v -ExSx	3 QE
05.   |   Ex-(-Sx v Px) v -ExSx	4 MI
06.   |   Ex(--Sx ^ -Px) v -ExSx	5 DM
07.   |   Ex(Sx ^ -Px) v -ExSx	6 DN
08.   |   O(S,P)		7 def O-Form
09.   -A(S,P) -> O(S,P)	1-8 ->I
10.   |   O(S,P)		H (for ->I)
11.   |   Ex(Sx ^ -Px) v -ExSx	10 def O-Form
12.   |   --Ex(Sx ^ -Px) v -ExSx	11 DN
13.   |   -Ax-(Sx ^ -Px) v -ExSx	12 QE
14.   |   -Ax(-Sx v --Px) v -ExSx	13 DM
15.   |   -Ax(-Sx v Px) v -ExSx	14 DN
16.   |   -Ax(Sx -> Px) v -ExSx	15 MI
17.   |   -(Ax(Sx -> Px) ^ ExSx)	16 DM
18.   |   -A(S,P)		17 def A-Form
19.   O(S,P) -> -A(S,P)	10-18 ->I
20.   -A(S,P) <-> O(S,P)	9,19 <->I  

 
<HR><HR><A NAME="941"></A><H2>941.  |-   -E(S,P) <-> I(S,P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Square of Oppositions :: Contradictories :: |-   -E(S,P) <-> I(S,P) </I> ]</UL>
|-  -E(S,P) <-> I(S,P)

01.   |   -E(S,P)		H (for ->I)
02.   |   -Ax(Sx -> -Px)	1 def E-Form
03.   |   Ex-(Sx -> -Px)	2 QE
04.   |   Ex-(-Sx v -Px)		3 MI
05.   |   Ex(--Sx ^ --Px)	4 DM
06.   |   Ex(Sx ^ --Px)		5 DN
07.   |   Ex(Sx ^ Px)		6 DN
08.   |   I(S,P)		7 def I-Form
09.   -E(S,P) -> I(S,P)	1-8 ->I
10.   |   I(S,P)		H (for ->I)
11.   |   Ex(Sx ^ Px)		10 def I-Form
12.   |   --Ex(Sx ^ Px)		11 DN
13.   |   -Ax-(Sx ^ Px)		12 QE
14.   |   -Ax(-Sx v -Px)		13 DM
15.   |   -Ax(Sx -> -Px)	14 MI
16.   |   -E(S,P)		15 def E-Form
17.   I(S,P) -> -E(S,P)	10-16 ->I
18.   -E(S,P) <-> I(S,P)	9,17 <->I  

 
<HR><HR><A NAME="942"></A><H2>942.  |-  -( A(S,P) ^  E(S,P) ) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Square of Oppositions :: Contraries :: |-  -( A(S,P) ^  E(S,P) ) </I> ]</UL>
|-  -( A(S,P) ^ E(S,P) )

01.   |   A(S,P) ^ E(S,P)		H (for -I)
02.   |   (Ax(Sx -> Px) ^ ExSx) ^ E(S,P)	1 def A-Form
03.   |   (Ax(Sx -> Px) ^ ExSx) ^ Ax(Sx -> -Px)   2 def E-Form
04.   |   Ax(Sx -> Px) ^ ExSx		3 ^E
05.   |   ExSx			4 ^E
06.   |   |   [c] Sc			H (for EE)
07.   |   |   Ax(Sx -> Px)		4 ^E
08.   |   |   Sc -> Pc			7 AE
09.   |   |   Pc			6,8 ->E
10.   |   |   Ax(Sx -> -Px)		3 ^E
11.   |   |   Sc -> -Pc			10 AE
12.   |   |   -Pc			6,11 ->E
13.   |   |   P ^ -P			9,12 CON
14.   |   P ^ -P			5,6-13 EE
15.   -( A(S,P) ^ E(S,P) )		1-14 -I  

 
<HR><HR><A NAME="943"></A><H2>943.  I(S,P) v O(S,P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Square of Oppositions :: Subcontraries :: I(S,P) v O(S,P) </I> ]</UL>
|-  I(S,P) v O(S,P)

01.   |   -( I(S,P) v O(S,P) )		H (for -I)
02.   |   -( Ex(Sx ^ Px) v O(S,P) )		1 def I-Form
03.   |   -( Ex(Sx ^ Px) v (Ex(Sx ^ -Px) v -ExSx)   2 def O-Form
04.   |   -Ex(Sx ^ Px) ^ -(Ex(Sx ^ -Px) v -ExSx)    3 DM
05.   |   -(Ex(Sx ^ -Px) v -ExSx)		4 ^E
06.   |   -Ex(Sx ^ -Px) ^ --ExSx		5 DM
07.   |   -Ex(Sx ^ -Px) ^ ExSx		6 DN
08.   |   -Ex(Sx ^ Px)			4 ^E
09.   |   -Ex(Sx ^ -Px)			7 ^E
10.   |   ExSx			7 ^E
11.   |   Ax-(Sx ^ Px)			8 QE
12.   |   Ax-(Sx ^ -Px)			9 QE
13.   |   Ax(-Sx v -Px)			11 DM
14.   |   Ax(-Sx v --Px)			12 DM
15.   |   Ax(-Sx v Px)			14 DN
16.   |   |   [c] Sc			H (for EE)
17.   |   |   -Sc v -Pc			13 AE
18.   |   |   -Sc v Pc			15 AE
19.   |   |   Sc -> -Pc			17 MI
20.   |   |   Sc -> Pc			18 MI
21.   |   |   -Pc			16,19 ->E
22.   |   |   Pc			16,20 ->E
23.   |   |   P ^ -P			21,22 CON
24.   |   P ^ -P			10,16-23 EE
25.   --( I(S,P) v O(S,P) )		1-24 -I
26.   I(S,P) v O(S,P)			25 -E  

 
<HR><HR><A NAME="944"></A><H2>944.  |-  A(S,P) -> I(S,P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Square of Oppositions :: Subalternation :: |-  A(S,P) -> I(S,P) </I> ]</UL>
|-  A(S,P) -> I(S,P)

01.   |   A(S,P)		H (for ->I)
02.   |   Ax(Sx -> Px) ^ ExSx	1 def A-Form
03.   |   ExSx		2 ^E
04.   |   |   [c] Sc		H (for EE)
05.   |   |   Ax(Sx -> Px)	2 ^E
06.   |   |   Sc -> Pc		5 AE
07.   |   |   Pc		4,6 ->E
08.   |   |   Sc ^ Pc		4,7 ^I
09.   |   |   Ex(Sx ^ Px)		8 EI
10.   |   Ex(Sx ^ Px)		3,4-9 EE
11.   |   I(S,P)		10 def I-Form
12.   A(S,P) -> I(S,P)		1-11 ->I  

 
<HR><HR><A NAME="945"></A><H2>945.  |-  E(S,P) -> O(S,P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Square of Oppositions :: Subalternation :: |-  E(S,P) -> O(S,P) </I> ]</UL>
|-  E(S,P) -> O(S,P)

01.   |   E(S,P)		H (for ->I)
02.   |   Ax(Sx -> -Px)		1 def E-Form
03.   |   |   -O(S,P)		H (for -I)
04.   |   |   -(Ex(Sx ^ -Px) v -ExSx)   3 def O-Form
05.   |   |   -Ex(Sx ^ -Px) ^ --ExSx	4 DM
06.   |   |   -Ex(Sx ^ -Px) ^ ExSx	5 DN
07.   |   |   Ax-(Sx ^ -Px) ^ ExSx	6 QE
08.   |   |   Ax(-Sx v --Px) ^ ExSx	7 DM
09.   |   |   Ax(-Sx v Px) ^ ExSx	8 DN
10.   |   |   ExSx		9 ^E
11.   |   |   |   [c] Sc		H (for EE)
12.   |   |   |   Ax(-Sx v Px)	9 ^E
13.   |   |   |   -Sc v Pc		12 AE
14.   |   |   |   --Sc		11 DN
15.   |   |   |   Pc		13,14 DS
16.   |   |   |   Sc -> -Pc	2 AE
17.   |   |   |   -Pc		11,16 ->E
18.   |   |   |   P ^ -P		15,17 CON
19.   |   |   P ^ -P		10, 11-18 EE
20.   |   --(Ex(Sx ^ -Px) v -ExSx)	3-19 -I
21.   |   Ex(Sx ^ -Px) v -ExSx	20 -E
22.   |   O(S,P)		21 def O-Form
23.   E(S,P) -> O(S,P)	1-22 ->I  

 
<HR><HR><A NAME="946"></A><H2>946.  E(S,P) <-> E(P,S) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Conversion :: E(S,P) <-> E(P,S) </I> ]</UL>
|-  E(S,P) <-> E(P,S)

01.   |   E(S,P)		H (for ->I)
02.   |   Ax(Sx -> -Px)		1 def E-Form
03.   |   Ax(--Px -> -Sx)	2 TRANS
04.   |   Ax(Px -> -Sx)		3 DN
05.   |   E(P,S)		4 def E-Form
06.   E(S,P) -> E(P,S)	1-5 ->I
07.   |   E(P,S)		H (for ->I)
08.   |   Ax(Px -> -Sx)		7 def E-Form
09.   |   Ax(--Sx -> -Px)	8 TRANS
10.   |   Ax(Sx -> -Px)		9 DN
11.   |   E(S,P)		10 def E-Form
12.   E(P,S) -> E(S,P)	7-11 ->I
13.   E(S,P) <-> E(P,S)	6,12 <->I  

 
<HR><HR><A NAME="947"></A><H2>947.  I(S,P) <-> I(P,S) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Conversion :: I(S,P) <-> I(P,S) </I> ]</UL>
|-  I(S,P) <-> I(P,S)

01.   |   I(S,P)	H (for ->I)
02.   |   Ex(Sx ^ Px)	1 def I-Form
03.   |   Ex(Px ^ Sx)	2 COM
04.   |   I(P,S)	3 def I-Form
05.   I(S,P) -> I(P,S)	1-4 ->I
06.   |   I(P,S)	H (for ->I)
07.   |   Ex(Px ^ Sx)	6 def I-Form
08.   |   Ex(Sx ^ Px)	7 COM
09.   |   I(S,P)	8 def I-Form
10.   I(P,S) -> I(S,P)	6-9 ->I
11.   I(S,P) <-> I(P,S)	5,10 <->I
  

 
<HR><HR><A NAME="948"></A><H2>948.  |- A(S,P) <-> E(S,-P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Obversion :: |- A(S,P) <-> E(S,-P) </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="949"></A><H2>949.  |- E(S,P) <-> A(S,-P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Obversion :: |- E(S,P) <-> A(S,-P) </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="950"></A><H2>950.  |- I(S,P) <-> O(S,-P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Obversion :: |- I(S,P) <-> O(S,-P) </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="951"></A><H2>951.  |- O(S,P) <-> I(S,-P) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Obversion :: |- O(S,P) <-> I(S,-P) </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="952"></A><H2>952.  Contraposition </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Contraposition </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="953"></A><H2>953.  A(S,P) <-> A(-P,-S) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Contraposition :: A(S,P) <-> A(-P,-S) </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="954"></A><H2>954.  O(S,P) <-> O(-P,-S) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Aristotlean Logic :: vs. First-Order Logic :: Theorems :: Transforms :: Contraposition :: O(S,P) <-> O(-P,-S) </I> ]</UL>
Not possible in Aristotle's original Square of Opposition.  

 
<HR><HR><A NAME="955"></A><H2>955.  Validity of Reductio </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Validity of Reductio </I> ]</UL>
Leibniz' Axioms

excluded middle:	P v -P
non-contradiction:	-(P ^ -P)

reductio ad absurdum:
   inference rule:	(P  |-  Q ^ -Q)  |-  -P
   axiom:		(P -> (Q ^ -Q)) -> -P

How can we show that reductio ad absurdum is valid?

First, let's build a simple proof that uses it -- Modus Tollens:

1.   P -> Q		A
2.   -Q		A
3.      P		H
4.      Q		1,3 ->E
5.      Q ^ -Q	2,4 ^I
6.   -P		3-5 -I

Now, redo the proof.  But use ->I instead of -I.

1.   P -> Q		A
2.   -Q		A
3.      P		H
4.      Q		1,3 ->E
5.      Q ^ -Q	2,4 ^I
6.   P -> ( Q ^ -Q )	3-5 ->I
7.   -P v ( Q ^ -Q )	6 Def of ->
8.   -P v F		7 Q ^ -Q = False
9.   -P		8 Identity Property

This is sort of interesting; it shows that non-contradiction and excluded middle are needed to prove the reductio.  But we've used ->I here, which is sort of like going backwards.  Let's try the proof without ->.

1.   -P v Q		A  (P -> Q)
2.   -Q		A
3.   -P		1,2 DS

But doesn't this proof seem like a cop-out?  Can we avoid using DS?


1.   -P v Q		A  (P -> Q)
2.   -Q		A
3.      P		H
4.      Q		1,3 DS
5.      Q ^ -Q	2,4 ^I
6.   -P v ( Q ^ -Q )	3-5 equiv to ->I
7.   -P v F		6 Q ^ -Q = False
8.   -P		7 Identity Property

Again, interesting.  But this proof requires the introduction of a new inference rule allowing us to introduce the premise -P v Q from a hypothetical argument with a hypothesis of P and a conclusion of Q.  
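Semantically, the reductio axiom can also be checked by brute force over all valuations.  The Python sketch below (illustrative, not part of the original notes) confirms that excluded middle, non-contradiction, and the axiom form (P -> (Q ^ -Q)) -> -P are all tautologies.

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is -a v b."""
    return (not a) or b

def is_tautology(formula, num_atoms):
    """True iff the formula holds on every valuation of its atoms."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_atoms))

assert is_tautology(lambda p: p or not p, 1)          # excluded middle
assert is_tautology(lambda p: not (p and not p), 1)   # non-contradiction
# reductio ad absurdum, in its axiom form:
assert is_tautology(lambda p, q: implies(implies(p, q and not q), not p), 2)
```
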

 
<HR><HR><A NAME="956"></A><H2>956.  Deduction without conditionals </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: My Own Stuff :: Deduction without conditionals </I> ]</UL>
If you set up a truth table:

P  Q  |  -P  P^Q  PvQ  P->Q  P<->Q
----------------------------------
T  T  |   F   T    T    T     T
T  F  |   F   F    T    F     F
F  T  |   T   F    T    T     F
F  F  |   T   F    F    T     T

You will see that in comparing an expression A with an expression B, when B is more generally true than A (true in at least all rows where A is true), then B is trivially inferable from A.  The following inferences (from the table) are all generalizations.

P ^ Q  |-  P <-> Q  |-  P -> Q
P ^ Q  |-  P        |-  P v Q
P ^ Q  |-  Q        |-  P v Q
           Q        |-  P -> Q
-P     |-  P -> Q

So generalization of an expression is trivial.  Specialization requires justification from at least one additional premise.
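The generalization claim can be tested directly: B is trivially inferable from A exactly when B is true in every row of the table where A is true.  A small Python sketch (illustrative, not from the original notes):

```python
from itertools import product

ROWS = list(product([True, False], repeat=2))   # the four (P, Q) rows

def entails(a, b):
    """A |- B trivially when B is true in at least all rows where A is."""
    return all(b(p, q) for (p, q) in ROWS if a(p, q))

conj = lambda p, q: p and q          # P ^ Q
bic  = lambda p, q: p == q           # P <-> Q
cond = lambda p, q: (not p) or q     # P -> Q
disj = lambda p, q: p or q           # P v Q

assert entails(conj, bic) and entails(bic, cond)   # P ^ Q |- P <-> Q |- P -> Q
assert entails(lambda p, q: q, disj)               # Q |- P v Q
assert entails(lambda p, q: not p, cond)           # -P |- P -> Q
assert not entails(cond, conj)                     # specialization is not free
```
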

Notice that P -> Q is inferred from -P or from Q.   ( -P v Q -- Hmmm ).  This suggested to me that both conditional operators can be removed from the language.  -, ^ and v are truth-functionally complete.  Perhaps they are proof-functionally complete as well.  Thus:

NAO Propositional calculus

-E:	--P  |-  P
-I:	( P  |-  Q ^ -Q )  |-  -P
^E:	P ^ Q  |-  P
^I:	P, Q  |-  P ^ Q
vE:	P v Q, -P v R, -Q v R  |-  R
vI:	P  |-  P v Q

There is only one kind of hypothetical argument for -I.
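Each NAO rule can be checked for semantic soundness: on every valuation making its premises true, its conclusion is true.  A Python sketch (illustrative only; the three-premise vE rule is the interesting case):

```python
from itertools import product

def valid(premises, conclusion, num_atoms):
    """Semantic soundness check: the conclusion holds on every
    valuation on which all the premises hold."""
    return all(conclusion(*v)
               for v in product([False, True], repeat=num_atoms)
               if all(prem(*v) for prem in premises))

# vE:  P v Q, -P v R, -Q v R  |-  R
assert valid([lambda p, q, r: p or q,
              lambda p, q, r: (not p) or r,
              lambda p, q, r: (not q) or r],
             lambda p, q, r: r, 3)

# vI:  P  |-  P v Q
assert valid([lambda p, q: p], lambda p, q: p or q, 2)
```
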

v-Idem: Disjunctive Idempotence
|-  P v -P

1.   -(P v -P)				H
2.      P					H
3.      P v -P				2 vI
4.      (P v -P) ^ -(P v -P)		1,3 ^I
5.   -P					2-4 -I
6.   P v -P					5 vI
7.   (P v -P) ^ -(P v -P)		1,6 ^I
8.   --(P v -P)				1-7 -I
9.   P v -P					8 -E


DS: Disjunctive Syllogism
P v Q, -P  |-  Q

 1.  P v Q					A
 2.  -P					A
 3.  -P v Q					2 vI
 4.  |  -(Q v -Q)				H
 5.  |  |  Q					H
 6.  |  |  Q v -Q				5 vI
 7.  |  |  (Q v -Q) ^ -(Q v -Q)	4,6 ^I
 8.  |  -Q					5-7 -I
 9.  |  Q v -Q				8 vI
10. |  (Q v -Q) ^ -(Q v -Q)	4,9 ^I
11.  --(Q v -Q)				4-10 -I
12.  Q v -Q				11 -E
13.  Q  				1,3,12 vE


HS: Hypothetical Syllogism
-P v Q, -Q v R  |-  -P v R

 1.  -P v Q					A
 2.  -Q v R					A
 3.  |  -(-P v R)				H
 4.  |  |  -Q					H
 5.  |  |  -P					1,4 DS
 6.  |  |  -P v R				5 vI
 7.  |  |  (-P v R) ^ -(-P v R)	3,6 ^I
 8.  |  --Q					4-7 -I
 9.  |  Q					8 -E
10.  |  R					2,9 DS
11.  |  -P v R				10 vI
12.  |  (-P v R) ^ -(-P v R)		3,11 ^I
13.  --(-P v R)				3-12 -I
14.  -P v R					13 -E  

 
<HR><HR><A NAME="957"></A><H2>957.  Set Theory </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory </I> ]</UL>
<H4>Description</H4>                

   <P> Axiomatic Set Theory  

 </P>  
<HR><HR><A NAME="958"></A><H2>958.  Dictionary </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Dictionary </I> ]</UL>
<H4>Variables</H4>                

   <P> a,b,c,...	sets

   </P> <P> x,y,z,...	any objects (including sets)

</P>  <H4>Predicates</H4>                

   x &#8712; a	(epsilon), x is a member of a  

  
<HR><HR><A NAME="959"></A><H2>959.  Axiom of Extensionality </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Axioms :: Axiom of Extensionality </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;a&#8704;b(&#8704;x(x &#8712; a  &#8596;  x &#8712; b) &#8594; a=b)

</LI> </UL> <H4>Meaning</H4>                

   <P> If sets a and b have the same elements, then a=b.  

 </P>  
<HR><HR><A NAME="960"></A><H2>960.  Axiom of Comprehension </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Axioms :: Axiom of Comprehension </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8707;a&#8704;x(x &#8712; a &#8596; Px)

</LI> </UL> <H4>Meaning</H4>                

   <P> A set is defined by exactly those objects in the universe of discourse which possess some property P.  We also require that 'a' not occur in the WFF for P.

</P>  <H4>Notes</H4><UL>            

   <LI> This is not just one axiom, but an infinite collection of axioms, one for each wff P.

   </LI> <LI> Alternate form if additional variables are needed by P.<BR>
   &#8704;z&#8704;z'&#8704;z''...&#8707;a&#8704;x( x &#8712; a  &#8596;  Px )  

 </LI> </UL> 
<HR><HR><A NAME="961"></A><H2>961.  Proposition 1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 1 </I> ]</UL>
<H4>Description</H4>                

   <P> Combining the axioms of Extensionality and Comprehension allows us to prove a claim about sets that clearly distinguishes them from properties.

</P>  <H4>Proposition</H4>                

   <P> For each wff Px we can prove that there is a unique set of objects that satisfy Px.  &#8704;z...&#8704;z'&#8707;!a&#8704;x(x &#8712; a  &#8596;  Px)

</P>  <H4>Proof</H4>                

   <P> We will prove the claim using Universal Introduction.  Let z,...,z' be arbitrary objects.  The axiom of comprehension assures us that there is at least one set of objects that satisfy P.  So we need only prove that there is at most one such set.  Suppose 'a' and 'b' are both sets that have as members exactly those things that satisfy P.  That is, 'a' and 'b' satisfy:

<PRE>
   &#8704;x(x &#8712; a &#8596; Px)
   &#8704;x(x &#8712; b &#8596; Px)
</PRE>

   </P> <P> But then it follows that 'a' and 'b' satisfy:

<PRE>
   &#8704;x(x &#8712; a &#8596; x &#8712; b)
</PRE>

   </P> <P> Applying the Axiom of Extensionality to this last claim gives us a=b, which is what we needed to prove.

</P>  <H4>Notes</H4><UL>            

   <LI> This proposition allows us to deduce the existence of the set of objects that satisfy P.  We sometimes write this informally as:

<PRE>
   { x | Px }, the set of x such that Px.
</PRE>  

 </LI> </UL> 
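<P> Over a fixed, finite universe of discourse, { x | Px } corresponds directly to a set comprehension, and Python's (extensional) set equality illustrates the uniqueness claim of Proposition 1.  A sketch, with an arbitrary sample property standing in for P: </P>

```python
universe = range(10)

def P(x):
    """A sample property (any wff in one free variable would do)."""
    return x % 2 == 0

a = {x for x in universe if P(x)}   # { x | Px }
b = {x for x in universe if P(x)}   # built a second time

assert a == b                                   # at most one such set
assert all((x in a) == P(x) for x in universe)  # comprehension
```
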
<HR><HR><A NAME="962"></A><H2>962.  Definition of Singleton </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of Singleton </I> ]</UL>
<H4>Description</H4>                

   <P> A singleton set&#8797;a set for which only one object satisfies P.  Such a set contains only that one object.  

 </P>  
<HR><HR><A NAME="963"></A><H2>963.  Definition of Empty Set </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of Empty Set </I> ]</UL>
<H4>Description</H4>                

   <P> <I>Empty Set</I>&#8797;a set for which no object satisfies P.  The set contains no objects.

   </P> <P> It is common to use &#8709; or { } as abbreviations for the empty set.  

 </P>  
<HR><HR><A NAME="964"></A><H2>964.  Definition of Subset </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of Subset </I> ]</UL>
<H4>Description</H4>                

   <P> 'a &#8838; b' &#8797; '&#8704;x(x &#8712; a  &#8594;  x &#8712; b)'<BR>
   Where, 'a' and 'b' are sets.

</P>  <H4>Meaning</H4>                

   <P> Given sets 'a' and 'b', we say that 'a' is a subset of 'b', written a &#8838; b, provided every member of 'a' is also a member of 'b'.

</P>  <H4>Notes</H4><UL>            

   <LI> &#8838; may be introduced as a predicate rather than a definition:

<PRE>
   &#8704;a&#8704;b(a &#8838; b  &#8596;  &#8704;x(x &#8712; a &#8594; x &#8712; b))
</PRE>  

 </LI> </UL> 
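For readers who like to experiment, the definition can be mirrored in Python, whose built-in sets support the subset test directly.  This is a minimal sketch (the name is_subset is ours, not part of the text):

```python
def is_subset(a, b):
    """a is a subset of b iff every member of a is a member of b."""
    return all(x in b for x in a)

a = {1, 2}
b = {1, 2, 3}
assert is_subset(a, b)               # every member of a is in b
assert not is_subset(b, a)           # 3 is in b but not in a
assert is_subset(a, b) == (a <= b)   # agrees with Python's built-in operator
```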
<HR><HR><A NAME="965"></A><H2>965.  Proposition 2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 2 </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;a(a &#8838; a)

</LI> </UL> <H4>Meaning</H4>                

   <P> For any set a, a &#8838; a.

</P>  <H4>Proof</H4>                

   <P> Let 'a' be an arbitrary set.  For purposes of general conditional proof, assume that 'c' is an arbitrary member of 'a'.  Then, trivially (by reiteration), 'c' is a member of 'a'.  So &#8704;x(x &#8712; a &#8594; x &#8712; a).  But then we can apply our definition of subset to conclude that a &#8838; a.  Hence, &#8704;a(a &#8838; a).  

 </P>  
<HR><HR><A NAME="966"></A><H2>966.  Proposition 3 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 3 </I> ]</UL>
<H4>Form</H4>                

   <P> &#8704;a&#8704;b(a=b &#8596; (a &#8838; b &#8743; b &#8838; a))

</P>  <H4>Meaning</H4>                

   <P> For all sets 'a' and 'b', a=b if and only if a &#8838; b and b &#8838; a.

</P>  <H4>Proof</H4>                

   <P> Again, we use the method of Universal Introduction.  Let 'a' and 'b' be arbitrary sets.  To prove the biconditional, we first prove that if 'a=b' then 'a &#8838; b' and 'b &#8838; a'.  So, assume that 'a=b'.  We need to prove that 'a &#8838; b' and 'b &#8838; a'.  But this follows from Proposition 2 and two uses of the indiscernibility of identicals.  To prove the other direction of the biconditional, we assume that 'a &#8838; b' and 'b &#8838; a', and show that 'a=b'.  To prove this, we use the Axiom of Extensionality.  By that axiom, it suffices to prove that 'a' and 'b' have the same members.  But this follows from our assumption, which tells us that every member of 'a' is a member of 'b' and vice-versa.  

 </P>  
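Proposition 3 can be checked mechanically with Python sets, where mutual subsethood does coincide with equality.  A minimal sketch (the name sets_equal is ours):

```python
def sets_equal(a, b):
    """Proposition 3: a = b iff a is a subset of b and b is a subset of a."""
    return a <= b and b <= a

assert sets_equal({1, 2, 3}, {3, 2, 1})      # order of listing is irrelevant
assert not sets_equal({1, 2}, {1, 2, 3})
assert sets_equal(set(), set())
```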
<HR><HR><A NAME="967"></A><H2>967.  Definition of intersection </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of intersection </I> ]</UL>
<H4>Form</H4>                

   <P> Let 'a' and 'b' be sets.

   </P> <P> The intersection of 'a' and 'b' is the set whose members are just those objects in both 'a' and 'b'.  This set is generally written 'a &#8745; b'.

<PRE>
   &#8704;a&#8704;b&#8704;z(z &#8712; (a &#8745; b)  &#8596;  (z &#8712; a &#8743; z &#8712; b))
</PRE>  

 </P>  
<HR><HR><A NAME="968"></A><H2>968.  Definition of union </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of union </I> ]</UL>
<H4>Description</H4>                

   <P> Let 'a' and 'b' be sets.

   </P> <P> The union of 'a' and 'b' is the set whose members are just those objects in either 'a' or 'b' or both.  This set is generally written 'a &#8746; b'.

<PRE>
   &#8704;a&#8704;b&#8704;z(z &#8712; (a &#8746; b)  &#8596;  (z &#8712; a &#8744; z &#8712; b))
</PRE>  

 </P>  
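Both definitions can be restated as Python set comprehensions, which makes the membership conditions explicit.  A sketch with sample sets of our own choosing:

```python
a = {1, 2, 3}
b = {2, 3, 4}

# z is in the intersection iff z is in a AND z is in b
intersection = {z for z in a | b if z in a and z in b}
# z is in the union iff z is in a OR z is in b
union = {z for z in a | b if z in a or z in b}

assert intersection == (a & b) == {2, 3}        # & is Python's intersection
assert union == (a | b) == {1, 2, 3, 4}         # | is Python's union
```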
<HR><HR><A NAME="969"></A><H2>969.  Proposition 4 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 4 </I> ]</UL>
<H4>Description</H4>                

   <P> Existence and uniqueness of 'a &#8745; b' for sets.

</P>  <H4>Proof</H4>                

   <P> For any pair of sets 'a' and 'b' there is one and only one set 'c' whose members are the objects in both 'a' and 'b'.

<PRE>
   &#8704;a&#8704;b&#8707;!c&#8704;x(x &#8712; c  &#8596;  (x &#8712; a &#8743; x &#8712; b))
</PRE>

or informally:

<PRE>
   c = { x | x &#8712; a &#8743; x &#8712; b }
</PRE>

</P>  <H4>Notes</H4>                

   <P> This proposition is actually just an instance of Proposition 1.  So it's a corollary (that is, an immediate consequence) of Proposition 1.  

 </P>  
<HR><HR><A NAME="970"></A><H2>970.  Proposition 5 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 5 </I> ]</UL>
<H4>Description</H4>                

   <P> Existence and uniqueness of 'a &#8746; b' for sets:

</P>  <H4>Proof</H4>                

   <P> For any pair of sets 'a' and 'b' there is one and only one set 'c' whose members are the objects in either  'a' or 'b' or both.

<PRE>
   &#8704;a&#8704;b&#8707;!c&#8704;x(x &#8712; c  &#8596;  (x &#8712; a &#8744; x &#8712; b))
</PRE>

or informally:

<PRE>
   c = { x | x &#8712; a &#8744; x &#8712; b }
</PRE>

</P>  <H4>Note</H4><UL>            

   <LI> This proposition is actually just an instance of Proposition 1.  So it's a corollary (that is, an immediate consequence) of Proposition 1.  

 </LI> </UL> 
<HR><HR><A NAME="971"></A><H2>971.  Proposition 6 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 6 </I> ]</UL>
<H4>Description</H4>                

   <P> Here are some theorems we can now prove:

<PRE>
Let a, b, and c be any sets.

1.  a &#8745; b  =  b &#8745; a
2.  a &#8746; b  =  b &#8746; a
3.  a &#8745; b  =  b, if and only if  b &#8838; a
4.  a &#8746; b  =  b, if and only if  a &#8838; b
5.  a &#8745; (b &#8746; c)  =  (a &#8745; b) &#8746; (a &#8745; c)
6.  a &#8746; (b &#8745; c)  =  (a &#8746; b) &#8745; (a &#8746; c)
</PRE>

</P>  <H4>Proof 1</H4>                

   <P> This follows quite easily from the definition of intersection and the Axiom of Extensionality.  To show that a &#8745; b  =  b &#8745; a, we need only show that a &#8745; b and b &#8745; a have the same members.  By the definition of intersection, the members of a &#8745; b are the things that are in both a and b, whereas the members of b &#8745; a are the things that are in both b and a.  These are clearly the same things.

</P>  <H4>Proof 3</H4>                

   <P> Let 'a' and 'b' be arbitrary sets.  We need to prove that a &#8745; b = b if and only if b &#8838; a.  To prove this, we give two conditional proofs.  First, assume a &#8745; b = b.  We need to prove that b &#8838; a.  But this means &#8704;x(x &#8712; b &#8594; x &#8712; a), so we will use the method of general conditional proof.  Let x be an arbitrary member of b.  We need to show that x &#8712; a.  But since b = a &#8745; b, we see that x &#8712; a &#8745; b.  Thus x &#8712; a &#8743; x &#8712; b by the definition of intersection.  Then it follows, of course, that x &#8712; a, as desired.

   </P> <P> Now let's prove the other half of the biconditional.  Thus, assume that b &#8838; a and let us prove that a &#8745; b = b.  By Proposition 3, it suffices to prove a &#8745; b &#8838; b and b &#8838; a &#8745; b.  The first of these is easy, and does not even use our assumption.  So let's prove the second, that b &#8838; a &#8745; b.  That is, we must prove that &#8704;x(x &#8712; b &#8594; x &#8712; (a &#8745; b)).  This is proven by general conditional proof.  Thus, let x be an arbitrary member of b.  We need to prove that x &#8712; a &#8745; b.  But by our assumption, b &#8838; a, so x &#8712; a.  Hence, x &#8712; a &#8745; b, as desired.  

 </P>  
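All six identities can be verified exhaustively over a small collection of test sets.  This is only a finite spot-check, not a proof, and the sample universe is our own:

```python
from itertools import product

# Check the six identities of Proposition 6 over every triple of small test sets.
universe = [set(), {1}, {2}, {1, 2}, {1, 2, 3}]
for a, b, c in product(universe, repeat=3):
    assert a & b == b & a                       # 1. commutativity of intersection
    assert a | b == b | a                       # 2. commutativity of union
    assert (a & b == b) == (b <= a)             # 3. a meet b = b iff b subset a
    assert (a | b == b) == (a <= b)             # 4. a join b = b iff a subset b
    assert a & (b | c) == (a & b) | (a & c)     # 5. intersection distributes over union
    assert a | (b & c) == (a | b) & (a | c)     # 6. union distributes over intersection
```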
<HR><HR><A NAME="972"></A><H2>972.  Proposition 7 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 7 </I> ]</UL>
<H4>Description</H4>                

   <P> (Unordered Pairs)  For any objects x and y there is a (unique) set a = {x,y}.

<PRE>
   &#8704;x&#8704;y&#8707;!a&#8704;w(w &#8712; a  &#8596;  (w=x &#8744; w=y))
</PRE>

</P>  <H4>Proof</H4>                

   <P> Let x and y be arbitrary objects, and let

<PRE>
   a = { w | w=x &#8744; w=y }
</PRE>

   </P> <P> The existence of 'a' is guaranteed by Comprehension, and its uniqueness follows from the Axiom of Extensionality.  Clearly 'a' has 'x' and 'y' and nothing else as elements.  

 </P>  
<HR><HR><A NAME="973"></A><H2>973.  Proposition 8 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 8 </I> ]</UL>
<H4>Description</H4>                

   <P> (Singletons)  For any object 'x' there is a singleton set {x}.

</P>  <H4>Proof</H4>                

   <P> To prove this, apply the previous proposition in the case where x=y.  

 </P>  
<HR><HR><A NAME="974"></A><H2>974.  Definition of Ordered Tuples </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of Ordered Tuples </I> ]</UL>
<H4>Description</H4>                

   <P> For any objects 'x' and 'y', we take the ordered pair &lt;x,y&gt; to be the set { {x}, {x,y} }.

<PRE>
   &#8704;x&#8704;y &lt;x,y&gt; = { {x}, {x,y} }
</PRE>

   </P> <P> Further, we can represent an ordered triple &lt;x,y,z&gt; as &lt;x, &lt;y,z&gt;&gt;.

   </P> <P> More generally, we can represent ordered n-tuples as &lt;x1, &lt;x2, ..., xn&gt;&gt;.

</P>  <H4>Notes</H4><UL>            

   <LI> n-tuple notation &lt;x1, ..., xn&gt; is not part of the language of set theory.  

 </LI> </UL> 
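The { {x}, {x,y} } encoding can be built concretely with Python frozensets, which are hashable and so can be members of other sets.  A sketch (the names pair and triple are ours):

```python
def pair(x, y):
    """The ordered pair <x, y> encoded as the set { {x}, {x, y} }."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# The defining property of ordered pairs: <x,y> = <u,v> iff x = u and y = v.
assert pair(1, 2) == pair(1, 2)
assert pair(1, 2) != pair(2, 1)      # order matters, unlike {1, 2} = {2, 1}

def triple(x, y, z):
    """An ordered triple <x, y, z> represented as <x, <y, z>>."""
    return pair(x, pair(y, z))

assert triple(1, 2, 3) != triple(3, 2, 1)
```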
<HR><HR><A NAME="975"></A><H2>975.  Relations of Tuples </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples </I> ]</UL>
A relation is a binary predicate used to define a set of tuples.  Relations may have various properties.  A property can be expressed as a condition on the extension of the relation.  

 
<HR><HR><A NAME="976"></A><H2>976.  Transitive </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Properties :: Transitive </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;x&#8704;y&#8704;z((Rxy &#8743; Ryz) &#8594; Rxz)  

 </LI> </UL> 
<HR><HR><A NAME="977"></A><H2>977.  Reflexive </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Properties :: Reflexive </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;x Rxx  

 </LI> </UL> 
<HR><HR><A NAME="978"></A><H2>978.  Irreflexive </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Properties :: Irreflexive </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;x &#172;Rxx  

 </LI> </UL> 
<HR><HR><A NAME="979"></A><H2>979.  Symmetrical </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Properties :: Symmetrical </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;x&#8704;y(Rxy &#8594; Ryx)  

 </LI> </UL> 
<HR><HR><A NAME="980"></A><H2>980.  Asymmetrical </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Properties :: Asymmetrical </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;x&#8704;y(Rxy &#8594; &#172;Ryx)  

 </LI> </UL> 
<HR><HR><A NAME="981"></A><H2>981.  Antisymmetrical </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Properties :: Antisymmetrical </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;x&#8704;y((Rxy &#8743; Ryx) &#8594; x=y)  

 </LI> </UL> 
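Each of the properties above is a checkable condition on a relation represented as a set of pairs over a domain D.  A sketch, with the function names and sample relations chosen by us:

```python
def transitive(R):
    return all((x, z) in R for (x, y1) in R for (y2, z) in R if y1 == y2)

def reflexive(R, D):
    return all((x, x) in R for x in D)

def irreflexive(R, D):
    return all((x, x) not in R for x in D)

def symmetrical(R):
    return all((y, x) in R for (x, y) in R)

def asymmetrical(R):
    return all((y, x) not in R for (x, y) in R)

def antisymmetrical(R):
    return all(x == y for (x, y) in R if (y, x) in R)

D = {1, 2, 3}
less_than = {(x, y) for x in D for y in D if x < y}
assert transitive(less_than) and irreflexive(less_than, D) and asymmetrical(less_than)

leq = {(x, y) for x in D for y in D if x <= y}
assert reflexive(leq, D) and antisymmetrical(leq) and transitive(leq)
```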
<HR><HR><A NAME="982"></A><H2>982.  Inverse Relations </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Inverse Relations </I> ]</UL>
<H4>Description</H4>                

   <P> Given any set-theoretic binary relation R on a universe of discourse D, the inverse of that relation is the relation R<SUP>-1</SUP> defined by

<PRE>
   R<SUP>-1</SUP> = { &lt;x, y&gt; | &lt;y, x&gt; &#8712; R }
</PRE>

   </P> <P> Thus, 'larger' and 'smaller' are inverse relations.  

 </P>  
<HR><HR><A NAME="983"></A><H2>983.  Equivalence Relations & Classes. </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Relations of Tuples :: Equivalence Relations & Classes. </I> ]</UL>
<H4>Description</H4>                

   <P> An equivalence relation is any relation that has the properties of reflexivity, symmetry and transitivity, such as 'being the same shape as'.

   </P> <P> The set of objects that stand in an equivalence relation R to some x is called the equivalence class of x, written [x]<SUB>R</SUB>.

<PRE>
   [x]<SUB>R</SUB> = { y | &lt;x,y&gt; &#8712; R }
</PRE>  

 </P>  
<HR><HR><A NAME="984"></A><H2>984.  Proposition 9 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 9 </I> ]</UL>
<H4>Description</H4>                

   <P> Let R be an equivalence relation on a set D.

<PRE>
1.  For each x &#8712; D, x &#8712; [x].
2.  For all x, y:  [x] = [y] if and only if &lt;x,y&gt; &#8712; R.
3.  For all x, y:  [x] = [y] if and only if &#172;([x] &#8745; [y] = { }).
</PRE>

</P>  <H4>Proof</H4>                

   <P> #1 follows from the fact that 'R' is reflexive on D.  #2 is more substantive.  Suppose that [x] = [y].  By #1, y &#8712; [y], so y &#8712; [x].  But then by the definition of [x], &lt;x,y&gt; &#8712; R.  For the converse, suppose that &lt;x,y&gt; &#8712; R.  We need to show that [x] = [y].  To do this, it suffices to prove that [x] &#8838; [y] and [y] &#8838; [x].  We prove the first, the second being entirely similar.  Let z &#8712; [x].  We need to show that z &#8712; [y].  Since z &#8712; [x], &lt;x,z&gt; &#8712; R.  From the fact that &lt;x,y&gt; &#8712; R, using symmetry, we obtain &lt;y,x&gt; &#8712; R.  By transitivity, from &lt;y,x&gt; &#8712; R and &lt;x,z&gt; &#8712; R we obtain &lt;y,z&gt; &#8712; R.  But then z &#8712; [y], as desired.  The proof of #3 is similar.  

 </P>  
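All three parts of Proposition 9 can be spot-checked on a concrete equivalence relation; we use 'having the same parity' on {0,...,5}.  A finite check, not a proof:

```python
# Equivalence classes under "same parity" on D = {0, ..., 5}.
D = range(6)
R = {(x, y) for x in D for y in D if x % 2 == y % 2}

def eq_class(x):
    """[x] = the set of y such that <x, y> is in R."""
    return {y for y in D if (x, y) in R}

for x in D:
    assert x in eq_class(x)                               # 1. x is in [x]
    for y in D:
        same = eq_class(x) == eq_class(y)
        assert same == ((x, y) in R)                      # 2. [x] = [y] iff <x,y> in R
        assert same == bool(eq_class(x) & eq_class(y))    # 3. [x] = [y] iff they overlap
```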
<HR><HR><A NAME="985"></A><H2>985.  Definition of Function </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Functions :: Definition of Function </I> ]</UL>
<H4>Description</H4>                

   <P> A relation R on a set D is said to be a function if it satisfies the following condition:

<PRE>
   &#8704;x&#8707;<SUB>&#8804;1</SUB>y Rxy
</PRE>

   </P> <P> i.e.  A relation is a function if for any "input" (x) there is at most one "output" (y).  If the function also has the following property, then it is called a total function on D:

<PRE>
   &#8704;x&#8707;y Rxy
</PRE>

</P>  <H4>Notes</H4><UL>            

   <LI> It is more common to write f(x) = y rather than &lt;x,y&gt; &#8712; f when f is a function.  

 </LI> </UL> 
<HR><HR><A NAME="986"></A><H2>986.  Definition of Domain </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Functions :: Definition of Domain </I> ]</UL>
<H4>Description</H4>                

   <P> The domain of a function f is the set

<PRE>
   { x | &#8707;y( f(x) = y ) }
</PRE>

   </P> <P> It is common to say that a function f is "defined on x" if x is in the domain of f.  

 </P>  
<HR><HR><A NAME="987"></A><H2>987.  Definition of Range </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Functions :: Definition of Range </I> ]</UL>
<H4>Description</H4>                

   <P> The range of a function f is the set

<PRE>
   { y | &#8707;x( f(x) = y ) }
</PRE>  

 </P>  
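Treating a function as a set of pairs, the "at most one output" condition, the domain, and the range can all be computed directly.  A sketch with a sample relation of our own:

```python
# A relation (set of pairs) is a function iff each x is related to at most one y.
def is_function(R):
    return all(y1 == y2 for (x1, y1) in R for (x2, y2) in R if x1 == x2)

f = {(1, 'a'), (2, 'b'), (3, 'a')}
not_f = {(1, 'a'), (1, 'b')}       # 1 has two "outputs", so not a function
assert is_function(f)
assert not is_function(not_f)

# Domain: the x's for which f is defined.  Range: the y's that f produces.
domain = {x for (x, y) in f}
rng = {y for (x, y) in f}
assert domain == {1, 2, 3}
assert rng == {'a', 'b'}
```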
<HR><HR><A NAME="988"></A><H2>988.  Definition of Powerset </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of Powerset </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The powerset of a set 'b' is the set of all subsets of 'b', denoted p(b) (the 'p' is conventionally written in a script typeface).  

 </P>  
<HR><HR><A NAME="989"></A><H2>989.  Proposition 10 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 10 </I> ]</UL>
<H4>Description</H4>                

   <P> For any set 'b' there is a unique set whose members are just the subsets of 'b'.

<PRE>
   &#8704;b&#8707;!c&#8704;x(x &#8712; c  &#8596;  x &#8838; b)
</PRE>

</P>  <H4>Proof</H4>                

   <P> By the Axiom of Comprehension, we may form the set c = { x | x &#8838; b }.  This is the desired set.  By the Axiom of Extensionality, there can be only one such set.

</P>  <H4>Examples</H4><UL>            

   <LI> b = { 2, 3 }<BR>
   p(b) = { { }, {2}, {3}, {2,3} }  

 </LI> </UL> 
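The worked example can be reproduced with a small powerset helper (the implementation below, using itertools, is our own sketch):

```python
from itertools import combinations

def powerset(b):
    """All subsets of b, returned as a set of frozensets."""
    items = list(b)
    return {frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)}

# The worked example from the text: b = { 2, 3 }.
pb = powerset({2, 3})
assert pb == {frozenset(), frozenset({2}), frozenset({3}), frozenset({2, 3})}

# Two items of Proposition 11 below: b is in p(b), and { } is in p(b).
assert frozenset({2, 3}) in pb and frozenset() in pb
```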
<HR><HR><A NAME="990"></A><H2>990.  Proposition 11 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 11 </I> ]</UL>
<H4>Description</H4>                

   <P> Let 'a' and 'b' be any sets.

<PRE>
   1.  b &#8712; p(b)
   2.  { } &#8712; p(b)
   3.  a &#8838; b  if and only if  p(a) &#8838; p(b)
</PRE>  

 </P>  
<HR><HR><A NAME="991"></A><H2>991.  Proposition 12 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 12 </I> ]</UL>
<H4>Description</H4>                

   <P> For any set 'b', it is not the case that p(b) &#8838; b.

   </P> <P> My Note:  This is a bit confusing, but what it is saying is that no set can have all of its subsets as members.

</P>  <H4>Proof</H4>                

   <P> Let 'b' be any set.  We want to prove that 'p(b) &#8840; b'.  To prove this, we construct a particular subset of 'b' that is not an element of 'b'.  Let

<PRE>
   c = { x | x &#8712; b &#8743; &#172;(x &#8712; x) }
</PRE>

   </P> <P> by the Axiom of Comprehension.  This set 'c' is clearly a subset of 'b' since it was defined to consist of those members of 'b' satisfying some additional condition.  It follows from the definition of the powerset operation that 'c' is an element of p(b).  We will show that 'c &#8713; b'.

   </P> <P> Toward a proof by contradiction, suppose that 'c &#8712; b'.  Then either 'c &#8712; c' or 'c &#8713; c'.  But which?  It is not hard to see that neither can be the case.  First, suppose that 'c &#8712; c'.  Then by our definition of 'c', 'c' is one of those members of 'b' that is left out of 'c'.  So 'c &#8713; c'.  Next consider the possibility that 'c &#8713; c'.  But then 'c' is one of those members of 'b' that satisfies the defining condition for 'c'.  Thus 'c &#8712; c'.  Thus we have proven that 'c &#8712; c  &#8596;  c &#8713; c', which is a contradiction.  So our assumption that 'c &#8712; b' must be false, and hence 'p(b) &#8840; b'.  

 </P>  
<HR><HR><A NAME="992"></A><H2>992.  Definition of Russell set </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Definition of Russell set </I> ]</UL>
<H4>Description</H4>                

   <P> The proof for Proposition 12 shows how to take any set 'b' and find a set 'c' which is a subset of 'b' but not a member of 'b', namely the set:

<PRE>
   c = { x | x &#8712; b &#8743; x &#8713; x }
</PRE>

</P>  <H4>Examples</H4><UL>            

   <LI> b = { 0, 1 }<BR>
   Russell set is b itself.

   </LI> <LI> b = { 0, { 0, { 0, ... } } }<BR>
   Russell set is { 0 } since b &#8712; b.  

 </LI> </UL> 
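The first example can be recreated with frozensets: since no frozenset contains itself, the Russell set of such a 'b' is 'b' itself.  The stand-ins for 0 and 1 below are our own encoding:

```python
# Russell set for b: the members of b that do not contain themselves.
empty = frozenset()                 # stand-in for the text's 0
one = frozenset({empty})            # stand-in for the text's 1
b = {empty, one}

c = {x for x in b if x not in x}
assert c == b                                # no frozenset contains itself, so c = b
assert c <= b and frozenset(c) not in b      # c is a subset of b but not a member of b
```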
<HR><HR><A NAME="993"></A><H2>993.  Proposition 13 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 13 </I> ]</UL>
<H4>Description</H4>                

   <P> For any set 'b', the Russell set for 'b', the set

<PRE>
   { x | x &#8712; b &#8743; x &#8713; x }
</PRE>

is a subset of 'b' but not a member of 'b'.

   </P> <P> This result is, as we will see, a very important one, one that immediately implies Proposition 12.  

 </P>  
<HR><HR><A NAME="994"></A><H2>994.  Proposition 14 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Proposition 14 </I> ]</UL>
<H4>Description</H4>                

   <P> There is a set 'c' such that p(c) &#8838; c.

</P>  <H4>Proof</H4>                

   <P> Using the Axiom of Comprehension, there is a universal set, a set that contains everything.  This is the set c = { x | x = x }.  But then every subset of 'c' is a member of 'c', so p(c) is a subset of 'c'.  

 </P>  
<HR><HR><A NAME="995"></A><H2>995.  Russell's Paradox </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Naive Set Theory :: Theorems & Definitions :: Russell's Paradox </I> ]</UL>
<H4>Description</H4>                

   <P> Proposition 14 contradicts Proposition 12.

   </P> <P> Together they say that the powerset of the universal set both is and is not a subset of the universal set.  This indicates that the axioms of 'naive set theory' are inconsistent.

   </P> <P> Russell's Paradox - Spoon Fed

<PRE>
   Let Z be the collection of all sets which do not contain themselves as members,  that is,

      Z = { x | x &#8713; x }

   Question:  Does Z belong to itself or not?
</PRE>  

 </P>  
<HR><HR><A NAME="996"></A><H2>996.  Axiom of Separation </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axiom of Separation </I> ]</UL>
<H4>Description</H4>                

   <P> It's been decided that the problem with naive set theory is that the Axiom of Comprehension allows sets that are 'too large'.  The Axiom of Separation starts by only allowing the formation of subsets of previously given sets.  If we are given a set 'a' and a wff Px then we may form the subset of 'a' by:

<PRE>
   { x | x &#8712; a &#8743; P(x) }
</PRE>

expressed formally as:

<PRE>
   &#8704;a&#8707;b&#8704;x( x &#8712; b  &#8596;  (x &#8712; a &#8743; Px) )
</PRE>

   </P> <P> It's actually just a modified Axiom of Comprehension.  And just like Comprehension, P may contain universally quantified variables.

   </P> <P> This axiom cannot be used to prove that the set of all sets exists.  It in fact can be used to show that it doesn't exist.

   </P> <P> However, this axiom is far too restrictive.  It blocks some legitimate uses made by Comprehension.

   </P> <P>ZFC set theory uses Separation instead of Comprehension, and then adds further axioms to recover the legitimate uses of Comprehension.  

 </P>  
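Separation corresponds exactly to a filtered comprehension over an already-given set, which is why it cannot produce anything larger than that set.  A sketch with a condition of our own choosing standing in for the wff Px:

```python
# Separation: from a given set a and condition P, form { x | x in a and P(x) }.
a = set(range(10))
P = lambda x: x % 2 == 0        # an arbitrary condition standing in for Px

b = {x for x in a if P(x)}
assert b <= a                   # Separation can only yield subsets of a
assert b == {0, 2, 4, 6, 8}
```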
<HR><HR><A NAME="997"></A><H2>997.  ... of Extensionality </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: ... of Extensionality </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> &#8704;a&#8704;b(&#8704;x(x &#8712; a  &#8596;  x&#8712; b) &#8594; a=b)

</LI> </UL> <H4>Translation</H4>                

   <P> If sets a and b have the same elements, then a=b.  

 </P>  
<HR><HR><A NAME="998"></A><H2>998.  ... of Separation </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: ... of Separation </I> ]</UL>
<H4>Description</H4>                

   <P> It's been decided that the problem with naive set theory is that the Axiom of Comprehension allows sets that are 'too large'.  The Axiom of Separation starts by only allowing the formation of subsets of previously given sets.  If we are given a set 'a' and a wff Px then we may form the subset of 'a' by:

<PRE>
   { x | x &#8712; a &#8743; P(x) }
</PRE>

expressed formally as:

<PRE>
   &#8704;a&#8707;b&#8704;x( x &#8712; b  &#8596;  (x &#8712; a &#8743; Px) )
</PRE>

It's actually just a modified Axiom of Comprehension.  And just like Comprehension, P may contain universally quantified variables.
  

 </P>  
<HR><HR><A NAME="999"></A><H2>999.  Unordered Pair Axiom </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Unordered Pair Axiom </I> ]</UL>
<H4>Description</H4>                

   <P> For any two objects there is a set that has both as elements.  

 </P>  
<HR><HR><A NAME="1000"></A><H2>1000.  Union Axiom </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Union Axiom </I> ]</UL>
<H4>Description</H4>                

   <P> Given any set 'a' of sets, the union of all the members of 'a' is also a set.  That is:

<PRE>
   &#8704;a&#8707;b&#8704;x( x &#8712; b  &#8596;  &#8707;c(c &#8712; a &#8743; x &#8712; c) )
</PRE>  

 </P>  
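The union over a set of sets has a direct Python rendering: collect every x that belongs to some member of 'a'.  A sketch with sample sets of our own:

```python
# Union Axiom: from a set a of sets, form { x | some c in a has x in c }.
a = {frozenset({1, 2}), frozenset({2, 3}), frozenset()}

union_of_a = {x for c in a for x in c}
assert union_of_a == {1, 2, 3}
assert union_of_a == set().union(*a)    # same set via the built-in union method
```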
<HR><HR><A NAME="1001"></A><H2>1001.  Powerset Axiom </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Powerset Axiom </I> ]</UL>
<H4>Description</H4>                

   <P> Every set has a powerset.  

 </P>  
<HR><HR><A NAME="1002"></A><H2>1002.  Axiom of Infinity </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Axiom of Infinity </I> ]</UL>
<H4>Description</H4>                

   <P> There is a set of all natural numbers.  

 </P>  
<HR><HR><A NAME="1003"></A><H2>1003.  Axiom of Replacement </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Axiom of Replacement </I> ]</UL>
<H4>Description</H4>                

   <P> Given any set 'a' and any operation 'F' that defines a unique object for each 'x' in 'a', there is a set

<PRE>
   { F(x) | x &#8712; a }
</PRE>

that is,<BR>
if  &#8704;x( x &#8712; a &#8594; &#8707;!y Pxy ), then there is a set b = { y | &#8707;x(x &#8712; a &#8743; Pxy) }  

 </P>  
<HR><HR><A NAME="1004"></A><H2>1004.  Axiom of Choice </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Axiom of Choice </I> ]</UL>
<H4>Description</H4>                

   <P> If f is a function with non-empty domain 'a' and for each 'x &#8712; a', f(x) is a non-empty set then there is a function 'g' also with domain 'a' such that for each 'x &#8712; a', 'g(x) &#8712; f(x)'.  (The function 'g' is called a choice function for 'f' since it chooses an element of 'f(x)' for each 'x &#8712; a'.)  

 </P>  
<HR><HR><A NAME="1005"></A><H2>1005.  Axiom of Regularity </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Set Theory :: Zermelo Frankel :: Axioms :: Axiom of Regularity </I> ]</UL>
<H4>Description</H4>                

   <P> Every nonempty set has at least one element that is disjoint from it.  That is:

<PRE>
   &#8704;b( b &#8800; { }  &#8594;  &#8707;y(y &#8712; b &#8743; (y &#8745; b = { })) )
</PRE>  

 </P>  
<HR><HR><A NAME="1006"></A><H2>1006.  Propositional Logic </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic </I> ]</UL>
Analyze the ability of a calculus to prove all and only those propositions that it should be able to prove.

Proving 'all' is called completeness.
Proving 'only' is called soundness.  

 
<HR><HR><A NAME="1007"></A><H2>1007.  Truth--Functional Completeness </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Truth--Functional Completeness </I> ]</UL>
Refers to the question of whether a given set of truth-functional operators is complete enough to represent all possible truth-functional operations.

It turns out that NAND and NOR are each, by themselves, able to express all other truth-functional operations.  
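The NAND half of this claim can be verified exhaustively: NOT, AND and OR can each be built from NAND alone and checked on every combination of inputs.  A minimal sketch:

```python
# NAND alone suffices: recover NOT, AND, OR from it and check all inputs.
def nand(p, q):
    return not (p and q)

def not_(p):
    return nand(p, p)

def and_(p, q):
    return nand(nand(p, q), nand(p, q))   # NOT applied to NAND gives AND

def or_(p, q):
    return nand(nand(p, p), nand(q, q))   # De Morgan: p or q = not(not p and not q)

for p in (True, False):
    assert not_(p) == (not p)
    for q in (True, False):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
```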

 
<HR><HR><A NAME="1008"></A><H2>1008.  Soundness (Fitch/Barwise) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Soundness (Fitch/Barwise) </I> ]</UL>
A sound calculus is one in which it is impossible to infer a false proposition from true premises.

Proof:

In some proof p, show that each step of p is a truth-functional/logical consequence of the assumptions & hypotheses in force at that step.  To prove this claim, we use proof by contradiction.  We suppose that there is a step that is not a consequence of the assumptions & hypotheses in force at that step.  We look at the first such invalid step and show that none of the rules of Fitch could have justified that step.  Thus, we must apply proof by cases to show that, no matter which rule of Fitch was applied at the invalid step, we get a contradiction.

cases follow:  

 
<HR><HR><A NAME="1009"></A><H2>1009.  Inductive Definition </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Truth Tables & Consequence :: Inductive Definition </I> ]</UL>
Let Q and R be arbitrary propositions.
Let h be any function from a proposition into the set { true, false }.

That is, each function h represents one row of the reference column of a truth table, and h(Q) gives the truth value of some proposition for that particular row.

Let h' be a function which tells us whether an expression evaluates to true (or false) -- in a sense, it fills in the rows of the truth table.  

 
<HR><HR><A NAME="1010"></A><H2>1010.  Tautology </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Truth Tables & Consequence :: Revised Definitions :: Tautology </I> ]</UL>
A sentence S is a tautology if every truth assignment h makes S come out true.

	that is, for every h, h'(S) = true  

 
<HR><HR><A NAME="1011"></A><H2>1011.  Truth-Functional Consequence </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Truth Tables & Consequence :: Revised Definitions :: Truth-Functional Consequence </I> ]</UL>
A sentence S is a truth-functional consequence of a set T of sentences provided every truth assignment  that makes all the sentences in T true also makes S true.  

 
<HR><HR><A NAME="1012"></A><H2>1012.  Satisfiable </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Truth Tables & Consequence :: Satisfiable </I> ]</UL>
A sentence S is satisfiable provided there is a truth assignment h such that h'(S) = true.  (i.e. S is satisfiable if at least one row in the truth table is true).

A set T of sentences is satisfiable if there is a single assignment h that makes each of the sentences in T true.  (i.e. T is satisfiable if there is one row for which every sentence in T is true).

This may seem confusing, but just think of a truth table.  There are two ways to think of satisfiability.
- Is there a situation (row) where all formulas are true?
- Is there a situation (row) which shows that it would be possible to form a conclusion from the set of formulas?  (such a row would be all true - consider how an argument is validated by a truth table).  
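The "try every row" picture can be made literal: enumerate every truth assignment over the atomic sentences and look for one that makes all the sentences true.  A brute-force sketch (the representation of sentences as Python functions is our own):

```python
from itertools import product

# Brute-force tt-satisfiability: try every truth assignment h (every row).
def satisfiable(sentences, atoms):
    """sentences: functions from an assignment dict to bool; atoms: atom names."""
    for row in product((True, False), repeat=len(atoms)):
        h = dict(zip(atoms, row))
        if all(s(h) for s in sentences):
            return True         # found a row making every sentence true
    return False

# T = { P or Q, not P } is satisfiable (take h(P)=False, h(Q)=True)...
T = [lambda h: h['P'] or h['Q'], lambda h: not h['P']]
assert satisfiable(T, ['P', 'Q'])
# ...but { P, not P } is not.
assert not satisfiable([lambda h: h['P'], lambda h: not h['P']], ['P'])
```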

 
<HR><HR><A NAME="1013"></A><H2>1013.  Proposition 1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Truth Tables & Consequence :: Proposition 1 </I> ]</UL>
The sentence S is a truth-functional consequence of the set T if and only if the set 'T union { &#172;S }' is not tt-satisfiable.

Note that if T is finite, we can reduce the question of whether S is a truth-functional consequence of T to the question of whether the conjunction of the sentences in T with &#172;S is not satisfiable.

Proof:  Suppose S is a truth-functional consequence of T.  By the definition of consequence, every truth assignment h that makes all of the sentences in T true also makes S true, and hence makes &#172;S false.  So no truth assignment makes all of the sentences in 'T union { &#172;S }' true; that is, the set is not satisfiable.

For the other direction, suppose 'T union { &#172;S }' is not satisfiable.  Then any truth assignment h that makes all of the sentences in T true must make &#172;S false, i.e. must make S true.  So S is a truth-functional consequence of T.  
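For small examples Proposition 1 can be checked mechanically. The sketch below uses the same illustrative encoding of sentences as Python functions of an assignment dict; `rows`, `satisfiable` and `consequence` are names chosen here, not from the text.

```python
from itertools import product

def rows(atoms):
    """Every truth assignment (row of the truth table) over the atoms."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def satisfiable(T, atoms):
    """Some row makes every sentence in T true."""
    return any(all(s(h) for s in T) for h in rows(atoms))

def consequence(T, S, atoms):
    """Every row making all of T true also makes S true."""
    return all(S(h) for h in rows(atoms) if all(s(h) for s in T))

# T = { A v B, not A }, S = B: S is a consequence of T, and
# T union { not S } is unsatisfiable, as Proposition 1 predicts.
T = [lambda h: h["A"] or h["B"], lambda h: not h["A"]]
S = lambda h: h["B"]
assert consequence(T, S, ["A", "B"])
assert not satisfiable(T + [lambda h: not S(h)], ["A", "B"])
```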

 
<HR><HR><A NAME="1014"></A><H2>1014.  Completeness Theorem </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem </I> ]</UL>
Theorem:  If a sentence S is a truth-functional consequence of a set T of sentences then T |- S.

We cannot prove this directly; instead we prove the contrapositive.  That is, if there is no proof of S from T, then S is not a truth-functional consequence of T.  So assume 'T not |- S'.  We will show there is an h (see Truth Tables & Consequence) that makes all the sentences in T true, but S false.  In other words, we'll show that 'T union { &#172;S }' is satisfiable.

This last statement may be confusing.  But look at it this way.  Proposition 1 says the following:

	S is a consequence of T  <->  &#172;Satisfiable( T union { &#172;S } )

If we negate both sides:

	S is not a consequence of T  <->  Satisfiable( T union { &#172;S } )

So showing that 'T union { &#172;S }' is satisfiable shows that S is not a consequence of T, which is what the contrapositive requires.  

 
<HR><HR><A NAME="1015"></A><H2>1015.  Lemma 2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Lemma 2 </I> ]</UL>
Lemma 2.  'T union { &#172;S } |- Q ^ &#172;Q' iff 'T |- S'

Proof:  Assume that 'T union { &#172;S } |- Q ^ &#172;Q', i.e. that there is a proof of contradiction from premises &#172;S and certain sentences P1, ..., Pn of T.  By rearranging these premises, we can suppose the formal proof has the following form:

	P1, ..., Pn, &#172;S  |-  Q ^ &#172;Q

We can use this proof to construct a formal proof of S from T.  Start with a proof with premises P1, ..., Pn.  Immediately begin a subproof with assumption &#172;S.  In that subproof, repeat the original proof of contradiction.  End the subproof and use &#172;Intro to conclude &#172;&#172;S from P1, ..., Pn.  Then apply &#172;Elim to get S.  The resulting proof will look like this:

	P1, ..., Pn, ( &#172;S  |-  Q ^ &#172;Q )  |-  &#172;&#172;S  |-  S

This formal proof shows that T |- S, as desired.

Now for the other direction.  Assuming T |- S, we start by listing our premises P1, ..., Pn from which we are able to conclude S.

	P1, ..., Pn  |-  S

Now if we introduce a subproof where we immediately assume &#172;S, we must surely reach a contradiction, since we are able to conclude S from our premises.

	P1, ..., Pn, ( &#172;S  |-  Q ^ &#172;Q )  |-  &#172;&#172;S  |-  S

So in adding &#172;S to our premises we are able to infer a contradiction.

	P1, ..., Pn, &#172;S  |-  Q ^ &#172;Q

Which is what we desired to prove.  

 
<HR><HR><A NAME="1016"></A><H2>1016.  Completeness Theorem (Restated) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) </I> ]</UL>
Theorem:  Every formally consistent set of sentences is satisfiable.

(see Formalization|Sentence Classifications|Consistencies)

There are three parts to the proof.  First, prove completeness for formally complete sets; then, extend formally consistent sets to formally complete sets; finally, put it all together.  

 
<HR><HR><A NAME="1017"></A><H2>1017.  Definition of Formally Complete </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Definition of Formally Complete </I> ]</UL>
A set T is formally complete if for any sentence S of the language, either T |- S or T |- &#172;S.

By 'any sentence S of the language' we mean any sentence that can be written in the language using all predicates, objects, operators, sentence symbols etc.  Thus, the set T is pretty strong.  

 
<HR><HR><A NAME="1018"></A><H2>1018.  Lemma 3 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Lemma 3 </I> ]</UL>
Let T be a formally consistent, formally complete set of sentences, and let R and S be any sentences of the language.  Then:

1.  T |- (R ^ S), iff T |- R and T |- S
2.  T |- (R v S), iff T |- R or T |- S
3.  T |- &#172;S, iff T &#172;|- S
4.  T |- (R -> S), iff T &#172;|- R or T |- S
5.  T |- (R <-> S), iff either T |- R and T |- S or T &#172;|- R and T &#172;|- S

Because these are iff's we need to prove each side entails the other.  

 
<HR><HR><A NAME="1019"></A><H2>1019.  Proof of 1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Lemma 3 :: Proof of 1 </I> ]</UL>
T  |-  (R ^ S), iff T  |-  R and T  |-  S

First, assume that T  |-  (R ^ S).  We will show that T  |-  R.  The proof that T  |-  S will be exactly the same.  Since T  |-  (R ^ S), there is a formal proof of (R ^ S) from T.  And we can take this proof and add one more step.  At this step, write the desired sentence R (or S), using the rule ^Elim.

For the other direction, let us suppose that T  |-  R and T  |-  S.  Thus, there are proofs of each of R and S from premises in T.  What we need to do is "merge" these two proofs into one.  Suppose the proof of R uses the premises P1, ..., Pn and looks like:

	P1, ..., Pn  |-  R

And suppose the proof of S uses the premises Q1, ..., Qm and looks like:

	Q1, ..., Qm  |-  S

To merge these two proofs into a single proof, we simply take the premises of both and put them into a single list.  Then we follow them with the steps from the proof of R, followed by the steps from the proof of S.  The citations in these steps need to be renumbered, but other than that, the result is a legitimate proof in Fitch.  At the end of this proof, we add a ^Intro of R and S to get R ^ S:

	P1, ..., Pn, Q1, ..., Qm  |-  R  |-  S  |-  R ^ S  

 
<HR><HR><A NAME="1020"></A><H2>1020.  Proof of 2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Lemma 3 :: Proof of 2 </I> ]</UL>
T  |-  (R v S), iff T |- R or T |- S

The proof from right to left is easy, using the rule of vIntro.  That is, given that T |- R, we can add a new step to the end of the proof, using vIntro to get from R to R v S.  We can argue similarly to get from T |- S to T |- R v S.

Now for the proof from left to right: that if T |- (R v S) then T |- R or T |- S.  This is not true in general, but it is for formally consistent, formally complete sets.

Assume that T |- (R v S), but, toward a proof by contradiction, that 'T not |- R' and 'T not |- S'.  Since T is formally complete, it follows that 'T |- &#172;R' and 'T |- &#172;S'.  This means that we have two formal proofs p1 and p2 from premises in T, p1 having &#172;R as a conclusion and p2 having &#172;S as a conclusion.  As we have seen, we can merge these two proofs into one long proof p that has both of these conclusions.  Then, by ^Intro, we can prove &#172;R ^ &#172;S.  But then using DeMorgan's laws we can extend this proof to get &#172;(R v S).  Thus 'T |- &#172;(R v S)'.  But by assumption we also have T |- (R v S).  By merging the proofs of &#172;(R v S) and R v S we can get a proof of contradiction by adding a single step.  But this means that T is formally inconsistent, contradicting our assumption that it is formally consistent.  

 
<HR><HR><A NAME="1021"></A><H2>1021.  Proof of 3 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Lemma 3 :: Proof of 3 </I> ]</UL>
The right to left half of part 3 follows immediately from the definition of formal completeness, while the left to right half follows easily from the definition of formal consistency.  

 
<HR><HR><A NAME="1022"></A><H2>1022.  Proof of 4 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Lemma 3 :: Proof of 4 </I> ]</UL>
Part 4 is similar to part 2.  

 
<HR><HR><A NAME="1023"></A><H2>1023.  Proof of 5 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Lemma 3 :: Proof of 5 </I> ]</UL>
Part 5 is similar to part 2.  

 
<HR><HR><A NAME="1024"></A><H2>1024.  Proposition 4 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Completeness for Formally Complete Sets :: Proposition 4 </I> ]</UL>
Every formally consistent, formally complete set of sentences is satisfiable.

Proof:  Let T be the formally consistent, formally complete set of sentences.  Define an assignment h on the atomic sentences of the language as follows.  If T |- A then let h(A) = true; otherwise let h(A) = false.  Then the function h' is defined on all the sentences of our language, atomic or complex.  We claim that:

	for all wffs S, h'(S) = true iff T |- S

The proof of this is a good example of the importance of proofs by induction on wffs.  The claim is true for all atomic wffs from the way that h is defined, and the fact that h and h' agree on atomic wffs.  We now show that if the claim holds of wffs R and S, then it holds of (R ^ S), (R v S), &#172;R, (R -> S) and (R <-> S).  These all follow easily from Lemma 3.  Consider the case of disjunction, for example.  We need to verify that h'(R v S) = true iff T |- (R v S).  To prove the 'only if' half, assume that h'(R v S) = true.  Then, by the definition of h', either h'(R) = true or h'(S) = true or both.  Then, by the induction hypothesis, either T |- R or T |- S or both.  But then by the lemma, T |- (R v S), which is what we wanted to prove.  The other direction is proved in a similar manner.

From the fact that we have just established, it follows that the assignment h makes every sentence provable from T true.  Since every sentence in T is certainly provable from T, by 'Reit' if you like, it follows that h makes every sentence in T true.  Hence T is satisfiable, which is what we wanted to prove.  

 
<HR><HR><A NAME="1025"></A><H2>1025.  Extending to Formally Complete Sets </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Extending to Formally Complete Sets </I> ]</UL>
The next step is to get from formally consistent sets of wffs to sets that are both formally consistent and formally complete.  

 
<HR><HR><A NAME="1026"></A><H2>1026.  Lemma 5 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Extending to Formally Complete Sets :: Lemma 5 </I> ]</UL>
A set of sentences T is formally complete if and only if for every atomic sentence A, T |- A or T |- &#172;A.

Proof:  The direction from left to right is just a consequence of the definition of formal completeness.  The direction from right to left is another example of a proof by induction on wffs.  Assume that T |- A or T |- &#172;A for every atomic sentence A.  We use induction to show that for any sentence S, T |- S or T |- &#172;S.  The basis of the induction is given by our assumption.  Let's prove the disjunction case.  That is, assume S is of the form P v Q.  By our inductive hypothesis, we know that T settles each of P and Q.  If T proves either one of these, then we know that T |- P v Q by vIntro.  So suppose that T |- &#172;P and T |- &#172;Q.  By merging these proofs and adding a step, we get a proof of &#172;P ^ &#172;Q.  We can continue this proof to get a proof of &#172;(P v Q), showing that T |- &#172;S, as desired.  The other inductive steps are similar.  

 
<HR><HR><A NAME="1027"></A><H2>1027.  Proposition 6 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Extending to Formally Complete Sets :: Proposition 6 </I> ]</UL>
Every formally consistent set of sentences T can be expanded to a formally consistent, formally complete set of sentences.

Proof:  Let us form a list A1, A2, A3, ... of all the atomic sentences of our language, say in alphabetical order.  Then go through these sentences one at a time.  Whenever you encounter a sentence Ai such that neither Ai nor &#172;Ai is provable from the set, add Ai to the set.  Notice that doing so can't make the set formally inconsistent.  If you could prove contradiction from the new set, then you could prove &#172;Ai from the previous set, by Lemma 2.  But if that were the case, you wouldn't have thrown Ai into the set.

The end result of this process is a set of sentences which, by the preceding lemma, is formally complete.  It is also formally consistent; after all, any proof of contradiction is a finite object, and so could use at most a finite number of premises.  But then it would already have been a proof of contradiction at some stage of this process, once all those premises had been thrown in, and we have seen that every stage is formally consistent.  

 
<HR><HR><A NAME="1028"></A><H2>1028.  Putting things together </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Semantic Completeness Theorem :: Completeness Theorem (Restated) :: Putting things together </I> ]</UL>
The final proof for completeness:

Proof:  Suppose 'T not |- S'.  Then by Lemma 2, 'T union { &#172;S }' is formally consistent.  This set can be expanded, by Proposition 6, to a formally consistent, formally complete set, which by Proposition 4 is satisfiable.  Suppose h is a truth value assignment that satisfies this set.  Clearly h makes all the members of T true, but S false, showing that S is not a truth-functional consequence of T.

  

 
<HR><HR><A NAME="1029"></A><H2>1029.  Compactness Theorem </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Compactness Theorem </I> ]</UL>
Theorem:  Let T be any set of sentences of truth-functional logic.  If every finite subset of T is satisfiable, then T itself is satisfiable.

Proof:  We prove the contrapositive of the claim.  Assume that T is not satisfiable.  Then by the Completeness Theorem, the set T is not formally consistent.  But this means that T |- Q ^ &#172;Q.  But a proof of contradiction from T can use only finitely many premises from T.  Let P1, ..., Pn be these premises.  By the Soundness Theorem, the set { P1, ..., Pn } is not satisfiable.  Consequently, there is a finite subset of T that is not satisfiable.  

 
<HR><HR><A NAME="1030"></A><H2>1030.  Horn Sentences </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences </I> ]</UL>
Horn sentences provide a relatively efficient way of discovering if a set of sentences is satisfiable.

A Horn sentence is a conjunction of one or more terms:

	term ^ term ^ ...

A term may take any of the following forms:

	(&#172;A1 v ... v &#172;An v B)
	(&#172;A1 v ... v &#172;An)
	B

That is, a term may have at most one nonnegated disjunct.	  

 
<HR><HR><A NAME="1031"></A><H2>1031.  Using conditionals </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Using conditionals </I> ]</UL>
A term can be replaced with a conditional.

	(&#172;A1 v ... v &#172;An v B)

becomes

	(A1 ^ ... ^ An) -> B

The other forms of terms require the following proposition.  

 
<HR><HR><A NAME="1032"></A><H2>1032.  Proposition 7 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Proposition 7 </I> ]</UL>
Any Horn sentence of propositional logic is logically equivalent to a conjunction of conditional statements of the following three forms, where the Ai and B stand for ordinary atomic sentences:

1.  (A1 ^ ... ^ An) -> B
2.  (A1 ^ ... ^ An) -> (Q ^ &#172;Q)
3.  (Q v &#172;Q) -> B  
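A minimal sketch of the rewriting behind Proposition 7. Assume (purely for illustration) that a term is represented as a list of literal strings, with a leading '-' marking negation, and that "Q v -Q" / "Q ^ -Q" stand for the tautology and contradiction:

```python
def to_conditional(clause):
    """Convert one Horn term -- a list of literals like ["-A1", "-A2", "B"]
    with at most one nonnegated disjunct -- into conditional form
    (antecedents, consequent), following the three forms above."""
    negs = [lit[1:] for lit in clause if lit.startswith("-")]
    pos = [lit for lit in clause if not lit.startswith("-")]
    assert len(pos) <= 1, "not a Horn term"
    antecedent = negs if negs else ["Q v -Q"]     # form 3: (Q v -Q) -> B
    consequent = pos[0] if pos else "Q ^ -Q"      # form 2: ... -> (Q ^ -Q)
    return antecedent, consequent

assert to_conditional(["-A1", "-A2", "B"]) == (["A1", "A2"], "B")   # form 1
assert to_conditional(["-A1", "-A2"]) == (["A1", "A2"], "Q ^ -Q")   # form 2
assert to_conditional(["B"]) == (["Q v -Q"], "B")                   # form 3
```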

 
<HR><HR><A NAME="1033"></A><H2>1033.  Satisfaction Algorithm (1) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (1) </I> ]</UL>
Suppose we have a Horn sentence S built out of atomic sentences A1, ..., An.  Here is an efficient procedure for determining whether S is satisfiable.  

 
<HR><HR><A NAME="1034"></A><H2>1034.  1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (1) :: 1 </I> ]</UL>
Start out as though you were going to build a truth table, by listing all the atomic sentences in a row, followed by S.  But do not write TRUE or FALSE beneath any of them yet.  

 
<HR><HR><A NAME="1035"></A><H2>1035.  2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (1) :: 2 </I> ]</UL>
Check to see which, if any, of the atomic sentences are themselves conjuncts of S.  Write TRUE in the reference columns under these atomic sentences.  

 
<HR><HR><A NAME="1036"></A><H2>1036.  3 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (1) :: 3 </I> ]</UL>
If some of the atomic sentences are now assigned TRUE, then use these to fill in as much as you can of the right hand side of the table.  For example, if you have written TRUE under A5, then you will write FALSE wherever you find &#172;A5.  This, in turn, may tell you to fill in some more atomic sentences with TRUE.  For example, if &#172;A1 v A3 v &#172;A5 is a conjunct of S, and each of &#172;A1 and &#172;A5 has been assigned FALSE, then write TRUE under A3.  Proceed in this way until you run out of things to do.  

 
<HR><HR><A NAME="1037"></A><H2>1037.  Satisfaction Algorithm (2) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (2) </I> ]</UL>
Satisfaction algorithm for Horn sentences in conditional form

Suppose we have a Horn sentence S in conditional form, built out of atomic sentences A1, ..., An, as well as (Q v &#172;Q) and (Q ^ &#172;Q).  

 
<HR><HR><A NAME="1038"></A><H2>1038.  1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (2) :: 1 </I> ]</UL>
If there are any conjuncts of the form (Q v &#172;Q) -> Ai, write TRUE in the reference column under each such Ai.  

 
<HR><HR><A NAME="1039"></A><H2>1039.  2 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (2) :: 2 </I> ]</UL>
If one of the conjuncts is of the form (B1 ^ ... ^ Bk) -> A where you have assigned TRUE to each of B1, ..., Bk, then assign TRUE to A.  

 
<HR><HR><A NAME="1040"></A><H2>1040.  3 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (2) :: 3 </I> ]</UL>
Repeat step 2 as often as possible.  

 
<HR><HR><A NAME="1041"></A><H2>1041.  4 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Satisfaction Algorithm (2) :: 4 </I> ]</UL>
Again, one of two things will happen.  You may reach a point where you are forced to assign FALSE to a conditional of the form (B1 ^ ... ^ Bk) -> (Q ^ &#172;Q), because you have assigned TRUE to each of the Bi.  In this case you must assign FALSE to S, and S is not satisfiable.  If this does not happen, then fill in the remaining reference columns of atomic sentences with FALSE.  This will give a truth assignment that makes all the conditionals true and hence S true as well.  
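Steps 1 through 4 can be sketched as a forward-chaining loop. Assume, for illustration only, that each conjunct is encoded as a pair (antecedents, consequent), with the string "TRUE" as sole antecedent standing in for the tautology (Q v &#172;Q) and the string "FALSE" as consequent standing in for the contradiction (Q ^ &#172;Q):

```python
def horn_satisfiable(conditionals):
    """Forward chaining over a Horn sentence in conditional form.
    Returns (satisfiable?, set of atoms assigned TRUE); every atom
    not in the returned set gets FALSE, as in step 4."""
    true_atoms = set()
    changed = True
    while changed:                      # repeat step 2 as often as possible
        changed = False
        for antecedents, consequent in conditionals:
            fired = all(a == "TRUE" or a in true_atoms for a in antecedents)
            if fired and consequent not in true_atoms:
                if consequent == "FALSE":
                    return False, None  # forced to derive the contradiction
                true_atoms.add(consequent)
                changed = True
    return True, true_atoms

sat, atoms = horn_satisfiable([(["TRUE"], "A1"),          # tautology -> A1
                               (["A1"], "A2"),            # A1 -> A2
                               (["A1", "A3"], "FALSE")])  # A1 ^ A3 -> contradiction
assert sat and atoms == {"A1", "A2"}
sat, _ = horn_satisfiable([(["TRUE"], "A1"), (["A1"], "FALSE")])
assert not sat
```

Unlike the exponential truth-table search, this loop visits each conditional at most once per newly added atom, which is the efficiency the section advertises.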

 
<HR><HR><A NAME="1042"></A><H2>1042.  Theorem </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: Propositional Logic :: Derivative Applications :: Horn Sentences :: Theorem </I> ]</UL>
The algorithm for the satisfiability of Horn sentences is correct, in that it classifies as satisfiable exactly the satisfiable Horn sentences.

Proof:  There are two things to be proved here.  One is that any satisfiable sentence is classified as satisfiable by the algorithm.  The other is that anything classified by the algorithm as satisfiable really is satisfiable.  We are going to prove this result for the form of the algorithm that deals with conditionals.  Before getting down to work, let's rephrase the algorithm with a bit more precision.  Define sets T0, T1, ... of atomic sentences, together with (Q v &#172;Q) and (Q ^ &#172;Q), as follows.  Let T0 = { Q v &#172;Q }.  Let T1 be the set consisting of (Q v &#172;Q) together with all atomic sentences 'A' such that (Q v &#172;Q) -> A is a conjunct of S.  More generally, given Tn, define Tn+1 to be Tn together with all atomic sentences 'A' such that for some B1, ..., Bk in Tn, (B1 ^ ... ^ Bk) -> A is a conjunct of S.  Notice that Tn subsetOf Tn+1 for each n.  Since there are only finitely many atomic sentences in S, eventually we must have Tn = Tn+1.  The algorithm declares S to be satisfiable if and only if '(Q ^ &#172;Q) not in Tn'.  Furthermore, it claims that if (Q ^ &#172;Q) is not in Tn, then we can get a truth assignment for S by assigning TRUE to each atomic sentence in Tn and assigning FALSE to the rest.

To prove the first half of correctness, we will show that if S is satisfiable, then '(Q ^ &#172;Q) not in Tn'.  Let h be any truth assignment that makes S true.  An easy proof by induction on n shows that h(A) = TRUE for each A in Tn.  Hence (Q ^ &#172;Q) not in Tn, since h(Q ^ &#172;Q) = FALSE.

To prove the other half of correctness, we suppose that (Q ^ &#172;Q) not in Tn and define an assignment h by letting h(A) = TRUE for A in Tn, and letting h(A) = FALSE for the other atomic sentences of S.  We need to show that h'(S) = TRUE.  To do this, it suffices to show that h'(C) = TRUE for each conditional C that is a conjunct of S.  There are three types of conditionals to consider:

case 1:  The conjunct is of the form (Q v &#172;Q) -> A.  In this case A is in T1.  But then h' assigns TRUE to A and so to the conditional.

case 2:  The conjunct is of the form (A1 ^ ... ^ An) -> B.  If each of the Ai gets assigned TRUE, then each is in Tn and so B is in Tn+1 = Tn.  But then h' assigns TRUE to B and so to the conditional.  On the other hand, if one of the Ai gets assigned FALSE then the conditional comes out true under h'.

case 3:  The conjunct is of the form (A1 ^ ... ^ An) -> (Q ^ &#172;Q).  Since we are assuming (Q ^ &#172;Q) not in Tn, at least one of the Ai is not in Tn, so it gets assigned FALSE by h.  But then the antecedent of the conditional comes out false under h', so the whole conditional comes out true.  

 
<HR><HR><A NAME="1043"></A><H2>1043.  Domain of Discourse </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Domain of Discourse </I> ]</UL>
The set of all objects under consideration.  The domain of discourse is usually denoted by a 'D'.

Eg.

A high school student may ask his/her parents, "May I go to the dance?  Everyone is going."

Here, 'everyone' refers to every student (not every person on the planet).  

 
<HR><HR><A NAME="1044"></A><H2>1044.  Extensions of Predicates </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Domain of Discourse :: Extensions of Predicates </I> ]</UL>
The extension of a unary predicate P is a subset of D, which contains every object of D which satisfies P.

The extension of an n-ary predicate P (where n > 1) is the set of n-tuples <x1, ..., xn>, where 'x1, ..., xn e D', which satisfy P.  

 
<HR><HR><A NAME="1045"></A><H2>1045.  Referents </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Domain of Discourse :: Referents </I> ]</UL>
A referent is the object to which an object symbol refers.  

 
<HR><HR><A NAME="1046"></A><H2>1046.  Model </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Domain of Discourse :: Model </I> ]</UL>
(aka structure)

A model is a single object (or package) to represent the whole world.  It packages the domain of discourse, the extensions of the predicates and the referents of the names.

M is the 'model' function.  Its domain is the set consisting of the predicates, the names and the quantifier 'A'.  This function is called a 'first-order' structure provided the following are satisfied:

1.  M(universal quantifier) is a nonempty set D, called the 'domain of discourse' of M.

2.  If P is an n-ary predicate symbol of the language then M(P) is a set of n-tuples <x1, ..., xn> of elements of D.  This set is called the 'extension' of P in M.  It is required that the extension of the identity symbol consist of all pairs <x,x>, for 'x e D'.

3.  If 'c' is any name of the language, then M(c) is an element of D, and is called the 'referent' of c in M.

Instead of writing M(O) it is more common to write O(M), where M is a superscript.  
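Purely as an illustration, such a package can be written down as a small Python dictionary; the keys 'domain', 'extensions' and 'referents' and the toy predicates are naming choices made here, not standard notation.

```python
# A toy structure M with D = {1, 2, 3}: the unary predicate Even and the
# binary predicate Less get extensions; the name "c" gets a referent.
M = {
    "domain": {1, 2, 3},                          # M(A): the domain of discourse
    "extensions": {
        "Even": {(2,)},                           # unary: a set of 1-tuples
        "Less": {(1, 2), (1, 3), (2, 3)},         # binary: a set of pairs
        "=": {(x, x) for x in {1, 2, 3}},         # identity: all <x,x>, as required
    },
    "referents": {"c": 2},                        # M(c) is an element of D
}

assert (M["referents"]["c"],) in M["extensions"]["Even"]   # Even(c) holds in M
```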

 
<HR><HR><A NAME="1047"></A><H2>1047.  Variable Assignments </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Variable Assignments </I> ]</UL>
Let M be a first-order structure with domain D.  A variable assignment in M is, by definition, some (possibly partial) function 'g' defined on a set of variables and taking values in the set D.  Thus, for example, if D = { a,b,c } then the following would all be variable assignments in M:

1.  The function g1 which assigns 'b' to the variable 'x'.

2.  The function g2 which assigns 'a', 'b' and 'c' to the variables 'x', 'y' and 'z' respectively.

3.  The function g3 which assigns 'b' to all the variables of the language.

4.  The function g4 which is the empty function, that is, does not assign values to any variables.  

 
<HR><HR><A NAME="1048"></A><H2>1048.  Appropriate Assignments </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Appropriate Assignments </I> ]</UL>
Given a wff P, we say that the variable assignment 'g' is appropriate for P if all the free variables of P are in the domain of 'g', that is, if 'g' assigns objects to each free variable of P.  Thus the four variable assignments g1, g2, g3 and g4 listed above would have been appropriate for the following sorts of wffs, respectively:

1.  g1 is appropriate for any wff with the single free variable 'x', or with no free variables at all.

2.  g2 is appropriate for any wff whose free variables are a subset of { x, y, z }.

3.  g3 is appropriate for any wff at all.

4.  g4 (also written g0) is appropriate for any wff with no free variables, that is, for sentences, but not for wffs with free variables.  

 
<HR><HR><A NAME="1049"></A><H2>1049.  Modified Variable Assignments </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Modified Variable Assignments </I> ]</UL>
The notation g[z/b] signifies a modified variable assignment:  the assignment that agrees with g on every variable in its domain, except that it assigns the object b to the variable z (which may override an existing assignment).

Below are some examples relating respectively to the earlier examples.

1.  g1 assigns b to the variable x, so g1[y/c] assigns b to x and c to y.  By contrast, g1[x/c] assigns a value only to x, the value c.

2.  g2 assigns a, b, c to the variables x, y, z respectively.  Then g2[x/b] assigns the values b, b, c to x, y, z respectively.  The assignment g2[u/c] assigns the values c, a, b and c to the variables u, x, y, z respectively.

3.  g3 assigns b to all the variables of the language.  g3[y/b] is the same assignment, g3, but g3[y/c] is different.  It assigns c to y and b to every other variable.

4.  g4, the empty function, does not assign values to any variables.  Thus g4[x/b] is the function which assigns b to x.  Notice that this is the same function as g1.  
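In code, g[v/d] is just a copied dictionary with one entry updated. A minimal sketch, reproducing examples 1 and 4 above (the helper name `modify` is a choice made here):

```python
def modify(g, v, d):
    """g[v/d]: the assignment agreeing with g everywhere, except that
    it (re)assigns the object d to the variable v."""
    h = dict(g)   # copy, so g itself is left unchanged
    h[v] = d
    return h

g1 = {"x": "b"}                                      # g1 assigns b to x
assert modify(g1, "y", "c") == {"x": "b", "y": "c"}  # g1[y/c]
assert modify(g1, "x", "c") == {"x": "c"}            # g1[x/c] overrides x
g4 = {}                                              # the empty assignment
assert modify(g4, "x", "b") == g1                    # g4[x/b] is the same as g1
```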

 
<HR><HR><A NAME="1050"></A><H2>1050.  [[t]](M/g) </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: [[t]](M/g) </I> ]</UL>
[[t]](M/g), where M is a superscript and g is a subscript.

What variable assignments do is allow us to treat free variables as if they have a temporary denotation (i.e. as if they refer to some object).

Thus, if a variable assignment g is appropriate for a wff P, then between M and g, all the terms (constants and variables) in P have a denotation.  For any term t, we write [[t]](M/g) for the denotation of t.

Thus, [[t]](M/g) is t(M) (where M is a superscript) if t is an individual constant and g(t) if t is a variable.

  

  
<HR><HR><A NAME="1051"></A><H2>1051.  Definition of Satisfaction </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Satisfaction </I> ]</UL>
This is a broad definition of satisfaction which includes wff's which contain free variables.

Let P be a wff and let 'g' be an assignment in M which is appropriate for P.

1.  The atomic case.  Suppose P is R(t1, ..., tn), where R is an n-ary predicate.  Then 'g' satisfies P in M if and only if the n-tuple ([[t1]](M/g), ..., [[tn]](M/g) ) is in R(M) (where M is a superscript).

2.  Negation.  Suppose P is &#172;Q.  Then 'g' satisfies P in M if and only if 'g' does not satisfy Q.

3.  Conjunction.  Suppose P is Q ^ R.  Then 'g' satisfies P in M if and only if 'g' satisfies both Q and R.

4.  Disjunction.  Suppose P is Q v R, then 'g' satisfies P in M if and only if g satisfies Q or R or both.

5.  Conditional.  Suppose P is Q -> R.  Then 'g' satisfies P in M if and only if 'g' does not satisfy Q or 'g' satisfies R or both.

6.  Biconditional.  Suppose P is Q <-> R.  Then 'g' satisfies P in M if and only if 'g' satisfies both Q and R or neither.

7.  Universal Quantification.  Suppose P is AvQ.  Then 'g' satisfies P in M if and only if for every 'd e D(M-superscript)', g[v/d] satisfies Q.

8.  Existential Quantification.  Suppose P is EvQ.  Then 'g' satisfies P in M if and only if for some 'd e D(M-superscript)', g[v/d] satisfies Q.

It is customary to write  M |= P[g]  to indicate that the variable assignment 'g' satisfies wff P in the structure M.  
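Clauses 1 through 8 translate directly into a recursive evaluator over a finite structure. The sketch below assumes wffs are encoded as nested tuples and a structure as a dict with 'domain', 'extensions' and 'referents' keys; both encodings are illustrative choices, not from the text.

```python
def denote(t, M, g):
    """[[t]] in M under g: the referent if t is a name, g(t) if a variable."""
    return M["referents"][t] if t in M["referents"] else g[t]

def satisfies(M, P, g):
    """M |= P[g], following clauses 1-8 of the definition."""
    op = P[0]
    if op == "atom":                                  # 1. R(t1, ..., tn)
        R, terms = P[1], P[2:]
        return tuple(denote(t, M, g) for t in terms) in M["extensions"][R]
    if op == "not":                                   # 2. negation
        return not satisfies(M, P[1], g)
    if op == "and":                                   # 3. conjunction
        return satisfies(M, P[1], g) and satisfies(M, P[2], g)
    if op == "or":                                    # 4. disjunction
        return satisfies(M, P[1], g) or satisfies(M, P[2], g)
    if op == "implies":                               # 5. conditional
        return not satisfies(M, P[1], g) or satisfies(M, P[2], g)
    if op == "iff":                                   # 6. biconditional
        return satisfies(M, P[1], g) == satisfies(M, P[2], g)
    if op in ("forall", "exists"):                    # 7, 8. quantify via g[v/d]
        v, Q = P[1], P[2]
        results = (satisfies(M, Q, {**g, v: d}) for d in M["domain"])
        return all(results) if op == "forall" else any(results)
    raise ValueError("unknown operator: " + op)

M = {"domain": {1, 2, 3},
     "extensions": {"Less": {(1, 2), (1, 3), (2, 3)}},
     "referents": {"c": 1}}
# Ey Less(c, y) is true in M; Ax Ey Less(x, y) is false (3 is below nothing)
assert satisfies(M, ("exists", "y", ("atom", "Less", "c", "y")), {})
assert not satisfies(M, ("forall", "x", ("exists", "y", ("atom", "Less", "x", "y"))), {})
```

Note that the quantifier clauses pass the modified assignment {**g, v: d}, which is exactly g[v/d] from the earlier section.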

 
<HR><HR><A NAME="1052"></A><H2>1052.  Definition of Truth </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Truth </I> ]</UL>
Let L be some first-order language and let M be a structure for L.  A sentence P of L is true in M if and only if the empty variable assignment g0 satisfies P in M.  Otherwise P is false in M.

Note:  This is just a rigorous definition of a true sentence in first-order logic.  It requires the Definition of Satisfaction.  

 
<HR><HR><A NAME="1053"></A><H2>1053.  Predicate Truth Value </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Truth :: Predicate Truth Value </I> ]</UL>
The following is Varzi's somewhat less formal treatment of truth.

Given a model, every atomic formula P built up from those symbols is assigned a truth-value according to well-defined rules.  

 
<HR><HR><A NAME="1054"></A><H2>1054.  If P consists of a predicate letter followed by a single name letter, then P is assigned the value T if the object designated by... </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Truth :: Predicate Truth Value :: If P consists of a predicate letter followed by a single name letter, then P is assigned the value T if the object designated by... </I> ]</UL>
If P consists of a predicate letter followed by a single name letter, then P is assigned the value T if the object designated by the name letter is a member of the class designated by the predicate letter; otherwise P is assigned the value F.  

 
<HR><HR><A NAME="1055"></A><H2>1055.  If P consists of a predicate letter followed by two or more name letters, then P is assigned the value T if the objects designated b </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Truth :: Predicate Truth Value :: If P consists of a predicate letter followed by two or more name letters, then P is assigned the value T if the objects designated b </I> ]</UL>
If P consists of a predicate letter followed by two or more name letters, then P is assigned the value T if the objects designated by the name letters stand in the relation designated by the predicate letter; otherwise P is assigned the value F.  

 
<HR><HR><A NAME="1056"></A><H2>1056.  A universal quantification AxP is true in a model M if the wff P(a/x) is true in every a-variant of M, where 'a' is the first name </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Truth :: Predicate Truth Value :: A universal quantification AxP is true in a model M if the wff P(a/x) is true in every a-variant of M, where 'a' is the first name </I> ]</UL>
A universal quantification AxP is true in a model M if the wff P(a/x) is true in every a-variant of M, where 'a' is the first name letter in the alphabetic order not occurring in P and P(a/x) is the result of replacing all occurrences of x in P by a; if P(a/x) is false in some a-variant of M, then AxP is false in M.  

 
<HR><HR><A NAME="1057"></A><H2>1057.  An existential quantification ExP is true in M if the wff P(a/x) is true in some a-variant of M, where 'a' and P(a/x) (as described  </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Definition of Truth :: Predicate Truth Value :: An existential quantification ExP is true in M if the wff P(a/x) is true in some a-variant of M, where 'a' and P(a/x) (as described  </I> ]</UL>
An existential quantification ExP is true in M if the wff P(a/x) is true in some a-variant of M, where 'a' and P(a/x) are as in the previous rule; if P(a/x) is false in every a-variant of M, then ExP is false in M.  

 
<HR><HR><A NAME="1058"></A><H2>1058.  Proposition 1 </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Truth & Satisfaction :: Proposition 1 </I> ]</UL>
Let M1 and M2 be structures which have the same domain and assign the same interpretations to the predicates and constant symbols in a wff P.  Let g1 and g2 be variable assignments that assign the same objects to the free variables in P.  Then M1 |= P[g1] iff M2 |= P[g2].  

 
<HR><HR><A NAME="1059"></A><H2>1059.  Definition: First-Order Consequence </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Definition: First-Order Consequence </I> ]</UL>
A sentence Q is a first-order consequence of a set T = { P1, ... } of sentences if and only if every structure that makes all the sentences in T true also makes Q true.  

 
<HR><HR><A NAME="1060"></A><H2>1060.  Definition: First-Order Validity </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Definition: First-Order Validity </I> ]</UL>
A sentence P is a first-order validity if and only if every structure makes P true.  
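<P> The definition quantifies over every structure, so it cannot be checked mechanically in general.  Still, a brute-force search over small finite structures can refute a candidate validity, and illustrates what the definition demands.  The sketch below is illustrative only: it hard-codes two test sentences over a single unary predicate P, and a sentence surviving the search is not thereby proved valid. </P>

```python
from itertools import product

# Enumerate every extension of a unary predicate P over domains of
# size 1..3, and check the two hard-coded sentences in each such
# structure.  A sketch, not a decision procedure: passing here only
# means no small counterexample was found.

def true_in(sentence, extension, domain):
    if sentence == 'AxPx -> ExPx':
        return (not all(d in extension for d in domain)) or any(d in extension for d in domain)
    if sentence == 'ExPx -> AxPx':
        return (not any(d in extension for d in domain)) or all(d in extension for d in domain)

def holds_everywhere(sentence, max_size=3):
    for n in range(1, max_size + 1):
        domain = range(n)
        for bits in product([False, True], repeat=n):
            extension = {d for d, b in zip(domain, bits) if b}
            if not true_in(sentence, extension, domain):
                return False
    return True

print(holds_everywhere('AxPx -> ExPx'))  # True: no counterexample (domains are nonempty)
print(holds_everywhere('ExPx -> AxPx'))  # False: fails when P holds of some but not all
```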

 
<HR><HR><A NAME="1061"></A><H2>1061.  Theorem: Soundness </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Theorem: Soundness </I> ]</UL>
If T |- S, then S is a first-order consequence of set T.

Proof:  The proof is very similar to the proof of the truth-functional soundness theorem.  We will show that any sentence that occurs at any step in a proof p is a first-order consequence of the assumptions in force at that step (which include the premises of p).  This claim applies not just to sentences at the main level of proof p, but also to sentences appearing in subproofs, no matter how deeply nested.  The theorem follows from this claim because if S appears at the main level of p, then the only assumptions in force are the premises drawn from T.  So S is a first-order consequence of T.

Call a step of a proof valid if the sentence at that step is a first-order consequence of the assumptions in force at that step.  Our earlier proof of soundness for Fitch was actually a disguised form of induction on the number of the step in question.  Since we had not yet discussed induction, we disguised this by assuming there was an invalid step and considering the first of these.  When you think about it, you see that this is really just the inductive step in an inductive proof.  Assuming we have the first invalid step allows us to assume that all the earlier steps are valid, which is the inductive hypothesis, and then prove (by contradiction) that the current step is valid after all.  We could proceed in the same way here, but we will instead make the induction explicit.  We thus assume that we are at the nth step, that all earlier steps are valid, and show that this step is valid as well.

The proof is by cases, depending on which rule is applied at step n.  The cases for the rules for the truth-functional connectives work out pretty much as before.  We will look at one, to point out the similarity to our earlier soundness proof.  

 
<HR><HR><A NAME="1062"></A><H2>1062.  ->E </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Theorem: Soundness :: ->E </I> ]</UL>
->Elim:  Suppose the nth step derives the sentence R from an application of ->Elim to sentences Q -> R and Q appearing earlier in the proof.  Let A1, ..., Ak be a list of all the assumptions in force at step n.  By our induction hypothesis we know that Q -> R and Q are both established at valid steps, that is, they are first-order consequences of the assumptions in force at those steps.  Furthermore, since Fitch only allows us to cite sentences in the main proof or in subproofs whose assumptions are still in force, we know that the assumptions in force at steps Q -> R and Q are also in force at R.  Hence, the assumptions for these steps are among A1, ..., Ak.  Thus, both Q -> R and Q are first-order consequences of A1, ..., Ak.  We now show that R is a first-order consequence of A1, ..., Ak.

Suppose M is a first-order structure in which all of A1, ..., Ak are true.  Then we know that M |= Q -> R and M |= Q, since these sentences are first-order consequences of A1, ..., Ak.  But in that case, by the definition of truth in a structure we see that M |= R as well.  So R is a first-order consequence of A1, ..., Ak.  Hence, step n is a valid step.

Notice that the only difference in this case from the corresponding case in the proof of truth-functional soundness is our appeal to first-order structures rather than truth tables.  The remaining truth-functional rules are all similar.  Let's now consider a quantifier rule.  

 
<HR><HR><A NAME="1063"></A><H2>1063.  EE </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Theorem: Soundness :: EE </I> ]</UL>
Suppose the nth step derives the sentence R from an application of EE to the sentence ExPx and a subproof containing R at its main level, say at step m.  Let c be the new constant introduced in the subproof.  In other words, Pc is the assumption of the subproof containing R:

	..., ExPx, ..., ( [c] Pc  |-  ...  |-  R  |-  ... ), ...  |-  R

Let A1, ..., Ak be the assumptions in force at the final step with R.  Our inductive hypothesis assures us that steps ExPx and R (in the subproof) are valid steps; hence ExPx is a first-order consequence of the assumptions in force at that step, which are a subset of A1, ..., Ak, and R is a first-order consequence of the assumptions in force at its step, which are a subset of A1, ..., Ak plus the sentence Pc, the assumption of the subproof in which m occurs.

We need to show that R is a first-order consequence of A1, ..., Ak alone.  To this end, assume that M is a first-order structure in which each of A1, ..., Ak is true.  We need to show that R is true in M as well.  Since ExPx is a consequence of A1, ..., Ak, we know that this sentence is also true in M.  Notice that the constant c cannot occur in any of the sentences A1, ..., Ak, ExPx, or R, by the restriction on the choice of temporary names imposed by the EE rule.  Since M |= ExPx, we know that there is an object, say b, in the domain of M that satisfies Px.  Let M' be exactly like M, except that it assigns the object b to the individual constant c.  Clearly, M' |= Pc, by our choice of interpretation of c.  By Proposition 1, each of A1, ..., Ak is also true in M' (since c occurs in none of them), so M' |= R, because R is a first-order consequence of these sentences together with Pc.  Since c does not occur in R, R is also true in the original structure M, again by Proposition 1.  

 
<HR><HR><A NAME="1064"></A><H2>1064.  Skolem Normal Form </H2><UL>[ <I> Metatheory :: Metalogic (Nolt) :: Barwise :: First-Order Logic :: Skolem Normal Form </I> ]</UL>
Often, existential quantifiers can be replaced by function symbols.

Eg.

	AxEy HasNeighbor( x, y )

	meaning, everyone has at least one neighbor.

	But what if we wanted to talk about x's nearest or favorite neighbor?

	Ax HasNeighbor( x, fx )

The new sentence, containing a function symbol rather than E, is in Skolem normal form.  
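<P> The replacement of E by a function symbol can be mechanized.  The sketch below is a toy skolemizer under simplifying assumptions: formulas are nested tuples, every quantifier binds a distinct variable, and the fresh function symbols f1, f2, ... are names we invent for the illustration. </P>

```python
# Toy skolemization: replace each existentially quantified variable by
# a fresh function symbol applied to the universally quantified
# variables in whose scope it occurs.  Formulas are nested tuples.

def substitute(wff, v, term):
    # Replace every occurrence of the variable v by the Skolem term.
    if wff == v:
        return term
    if isinstance(wff, tuple):
        return tuple(substitute(part, v, term) for part in wff)
    return wff

def skolemize(wff, universals=(), counter=None):
    if counter is None:
        counter = [0]
    op = wff[0]
    if op == 'all':
        return ('all', wff[1], skolemize(wff[2], universals + (wff[1],), counter))
    if op == 'some':
        counter[0] += 1
        skolem_term = ('f%d' % counter[0],) + universals  # e.g. ('f1', 'x')
        return skolemize(substitute(wff[2], wff[1], skolem_term), universals, counter)
    return wff

# AxEy HasNeighbor(x, y)  becomes  Ax HasNeighbor(x, f1(x)):
print(skolemize(('all', 'x', ('some', 'y', ('HasNeighbor', 'x', 'y')))))
```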

 
<HR><HR><A NAME="1065"></A><H2>1065.  Conditionals </H2><UL>[ <I> Philosophy :: Nolt :: Conditionals </I> ]</UL>
<H4>Description</H4>                

   <P> The classical definition of the conditional operator (&#8594;) is quite controversial.  This is because the English conditional is not a truth-functional operator, though classical logic treats it as one.

   </P> <P> By the classical definition, a truth-functional operator is one whose truth value is strictly a function of the truth values of its components.  This is precisely the case with negation &#172;, conjunction &#8743; and disjunction &#8744;.  The English conditional does not follow this strict definition.  For this reason, the conditional of classical logic is often called the <B>material conditional</B> to set it apart from the way we use conditionals in English.

   </P> <P> The following sections give examples of how the conditional is not truly truth-functional.  

 </P>  
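<P> For reference in the sections that follow, the material conditional's truth function can be tabulated directly (a small sketch; the function name is ours): </P>

```python
# The material conditional is false only when the antecedent is true
# and the consequent is false.

def material_conditional(p, q):
    return (not p) or q

table = {(p, q): material_conditional(p, q)
         for p in (True, False) for q in (True, False)}
for (p, q), value in table.items():
    print(p, q, value)   # False appears only on the row (True, False)
```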
<HR><HR><A NAME="1066"></A><H2>1066.  False antecedent and consequent </H2><UL>[ <I> Philosophy :: Nolt :: Conditionals :: False antecedent and consequent </I> ]</UL>
<H4>Description</H4>                

   <P> By classical logic, when both &#934; and &#936; are false the conditional &#934; &#8594; &#936; is true.  But examine two examples:

<PRE>
   a.   If you are less than an inch tall, then you are less than a foot tall.

   b.   If you have no lungs, then you can breathe with your eyeballs.
</PRE>

   </P> <P> What makes statement a true is not that its antecedent is false (which is the reason according to classical logic), but that the connection is necessary.  Given that you are less than an inch tall, you <I>must</I> be less than a foot tall.

   </P> <P> Correspondingly, what makes statement b false seems to be the lack of just such a necessary connection.

   </P> <P> This suggests that in English, the conditional is true if and only if there exists a necessary connection from antecedent to the consequent.  The truth conditions for the material conditional take into account only the truth values of the antecedent and consequent, not the presence or lack of such a necessary connection.  

 </P>  
<HR><HR><A NAME="1067"></A><H2>1067.  False antecedent, true consequent </H2><UL>[ <I> Philosophy :: Nolt :: Conditionals :: False antecedent, true consequent </I> ]</UL>
<H4>Description</H4>                

   <P> This form also results in peculiar differences between the material conditional and the English conditional.  Consider:

<PRE>
   a.   If there are no people, then people exist.
</PRE>

   </P> <P> Here the antecedent is false, so according to classical logic the conditional must be true.  But this is crazy.  

 </P>  
<HR><HR><A NAME="1068"></A><H2>1068.  True antecedent and consequent </H2><UL>[ <I> Philosophy :: Nolt :: Conditionals :: True antecedent and consequent </I> ]</UL>
<H4>Description</H4>                

<PRE>
   If the Mississippi contains more than a thimbleful of water, then it is the greatest river in North America.
</PRE>

   <P> Both antecedent and consequent are true, but the conditional is clearly false.  

 </P>  
<HR><HR><A NAME="1069"></A><H2>1069.  Where is truth? </H2><UL>[ <I> Philosophy :: Truth :: Where is truth? </I> ]</UL>
When we speak of truth we implicitly speak of language, for there is no true/false in the universe.  The concept of truth only comes into play when we talk about the universe.  So, a fact of the universe is not true or false; it is just the case.  But a sentence about the universe is either true or false depending upon whether it accurately mirrors a fact of the universe.  

 
<HR><HR><A NAME="1070"></A><H2>1070.  Sentence vs Proposition </H2><UL>[ <I> Philosophy :: Truth :: Sentence vs Proposition </I> ]</UL>
A sentence cannot always simply be said to be true or false, but a proposition can.  The sentence "I am an American" would be true if said by Kennedy but false if said by Khrushchev.  Thus, this sentence is not a proposition because its truth is inexact.  However, "I, Kennedy, am an American." and "I, Khrushchev, am an American." are clearly propositions since they have clear truth-values.  We may say of the first example that it is not a proposition because it may be used to say many different things (ambiguous? - my note); conversely, different sentences can be used to say the same thing (e.g. "It's raining." or "Es regnet.").

A proposition makes an object out of the notion of what's said or expressed by the utterance of a certain sort of sentence, namely, one in the indicative mood which makes sense and doesn't fail in its references.  

 
<HR><HR><A NAME="1071"></A><H2>1071.  Correspondence Theory </H2><UL>[ <I> Philosophy :: Truth :: Theories :: Correspondence Theory </I> ]</UL>
Truth is a relational concept, like 'uncle', consisting in a relation of correspondence to a fact.  (One becomes an uncle by having a niece or nephew.)  A thought or proposition is true in those cases, and only those cases, where there is a corresponding fact of the matter.  (A man is an uncle in those and only those cases where there is a matching nephew or niece.)  

 
<HR><HR><A NAME="1072"></A><H2>1072.  The Core </H2><UL>[ <I> Philosophy :: Truth :: Theories :: Correspondence Theory :: The Core </I> ]</UL>
Ontology:  Implications for existence
Epistemology:  Consequences for knowledge

Facts are independent of our knowledge of them.  

 
<HR><HR><A NAME="1073"></A><H2>1073.  Ontological </H2><UL>[ <I> Philosophy :: Truth :: Theories :: Correspondence Theory :: The Core :: Ontological </I> ]</UL>
The study (theoretical/conceptual rather than scientific) of what sorts of things exist.

eg.

Theories to account for human action and perception (hierarchy)
   Dualist (mind and body)
   Monistic (only mind or body exists)
      Physicalist (Deny a separate mind.  Theory: complex operations of physical matter.)
      Idealist (Deny existence of matter.  Theory:  experience is simply the experiences of minds.)

The correspondence theorist claims there must be more than this.  In addition to individual minds and/or bodies, there must be facts about those minds and/or bodies.  The minds and bodies just are -- in addition, there must be facts about them.  It is the existence of facts which makes true propositions true.  This is a realist ontological claim.  It is also epistemologically realist.  

 
<HR><HR><A NAME="1074"></A><H2>1074.  Epistemological </H2><UL>[ <I> Philosophy :: Truth :: Theories :: Correspondence Theory :: The Core :: Epistemological </I> ]</UL>
Consequences for knowledge

The theory's epistemological realism consists in its commitment to what is known as the Law of Bivalence.  By making truth a matter of the existence of a certain sort of object, the theory commits itself to the possibility that propositions may be true or false even though we have no way of determining which they are.  

 
<HR><HR><A NAME="1075"></A><H2>1075.  Law of Bivalence </H2><UL>[ <I> Philosophy :: Truth :: Theories :: Correspondence Theory :: The Core :: Law of Bivalence </I> ]</UL>
Every proposition either has a corresponding fact in which case it's true; or does not in which case it's false.

So truth is dependent upon the existence of a corresponding object.

E.g.

'Oswald killed Kennedy.' is a proposition; we may, however, not know its truth.  We don't know whether there is a corresponding fact.  Such a proposition is called 'verification-transcendent'.  

 
<HR><HR><A NAME="1076"></A><H2>1076.  Wittgenstein </H2><UL>[ <I> Philosophy :: Truth :: Theories :: Correspondence Theory :: Problems :: Wittgenstein </I> ]</UL>
Elementary propositions get their meaning by association with, by corresponding to, particular states of affairs.  The visible (or audible) part of a proposition is a sentence, a sequence of signs.  These signs become symbols by an arbitrary act of correlating them with objects.  Facts about those symbols then come conventionally to picture certain states of affairs about the corresponding objects.  Thus at the heart of Wittgenstein's correspondence theory of truth is a picture theory of meaning.  Elementary propositions are facts about names, and thereby picture (or mean) atomic states of affairs, that is, certain combinations of objects.  In general, propositions (via the connection at the base level of elementary propositions and atomic facts) picture putative facts, or states of affairs.  

 
<HR><HR><A NAME="1077"></A><H2>1077.  Analytic Statement </H2><UL>[ <I> Philosophy :: Truth :: Analytic/Synthetic :: Analytic Statement </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An analytic statement is one that is a tautology.

</P>  <H4>Examples</H4><UL>            

   <LI> All bachelors are unmarried.

   </LI> <LI> 2 = 2

   </LI> <LI> 1 > 0

   </LI> <LI> All frogs are frogs.

   </LI> <LI> If everything is green, then this is green.  

 </LI> </UL> 
<HR><HR><A NAME="1078"></A><H2>1078.  Synthetic Statement </H2><UL>[ <I> Philosophy :: Truth :: Analytic/Synthetic :: Synthetic Statement </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A synthetic statement is one that's contingent.

</P>  <H4>Examples</H4><UL>            

   <LI> Daniel is a bachelor.  

 </LI> </UL> 
<HR><HR><A NAME="1079"></A><H2>1079.  a priori/posteriori knowledge </H2><UL>[ <I> Philosophy :: Truth :: How can we know something is true? :: a priori/posteriori knowledge </I> ]</UL>
<H4>Description</H4>                

   <P> Philosophers traditionally distinguish two kinds of knowledge.    

</P>  <H4>Notes</H4><UL>            

   <LI> Most of our knowledge is <I>a posteriori</I>.  Logical and mathematical knowledge is generally <I>a priori</I>.  

 </LI> </UL> 
<HR><HR><A NAME="1080"></A><H2>1080.  A Posteriori </H2><UL>[ <I> Philosophy :: Truth :: How can we know something is true? :: a priori/posteriori knowledge :: A Posteriori </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A posteriori (empirical) knowledge is knowledge based upon sense experience.

</P>  <H4>Examples</H4><UL>            

   <LI> Some bachelors are happy.  

 </LI> </UL> 
<HR><HR><A NAME="1081"></A><H2>1081.  A Priori </H2><UL>[ <I> Philosophy :: Truth :: How can we know something is true? :: a priori/posteriori knowledge :: A Priori </I> ]</UL>
<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A priori (rational) knowledge is knowledge not based on sense experience.

</P>  <H4>Examples</H4><UL>            

   <LI> All bachelors are unmarried.  

 </LI> </UL> 
<HR><HR><A NAME="1082"></A><H2>1082.  Analytic/Synthetic vs. a priori/a posteriori </H2><UL>[ <I> Philosophy :: Truth :: How can we know something is true? :: a priori/posteriori knowledge :: Analytic/Synthetic vs. a priori/a posteriori </I> ]</UL>
<H4>Description</H4>                

   <P> It may appear that a priori propositions can only be analytic and that a posteriori propositions can only be synthetic, but this is not the case.

</P>  <H4>Examples</H4><UL>            

   <LI> "Pi is a little over 3."<BR>
   Analytic<BR>
   It can be known a posteriori through measurement.<BR>
   It can be known a priori through calculation.

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> These examples seem to suggest that perhaps any analytic statement that is known a posteriori also could be known a priori.  So, the biggest question is, "Do we have any purely synthetic a priori knowledge?"  

 </LI> </UL> 
<HR><HR><A NAME="1083"></A><H2>1083.  Logical Consequence </H2><UL>[ <I> Philosophy :: Logical Consequence </I> ]</UL>
Overview:

   Clarify what follows from what.

   - What are the criteria by which we judge arguments, and argument-forms, to be valid?
   - What is the correct analysis of logical consequence?

Examples:

Notes:

- Here we are attempting to develop a technique for modeling the human thought process.  This is interesting.  Since it is indeed a (theoretical) model, what do the universe and meta-language look like?  Is 'logical consequence' just the 'laws' of the larger universe of logic?  If so, then saying that it's a 'technique for modeling' is incorrect.  

 
<HR><HR><A NAME="1084"></A><H2>1084.  Classical view </H2><UL>[ <I> Philosophy :: Logical Consequence :: Classical view </I> ]</UL>
Overview:

Valid logical arguments are a matter of form.

   Claims:

   - All instances of a valid form are valid arguments.
   - Any argument which does not conform to a valid form is invalid.

Examples:

   Any argument of the form

      Fa.
      All Fs are G.
      So, Ga

   is valid.

Notes:  

 
<HR><HR><A NAME="1085"></A><H2>1085.  Truth Preservation </H2><UL>[ <I> Philosophy :: Logical Consequence :: Classical view :: Fundamental Reasoning :: Truth Preservation </I> ]</UL>
Definition:

If there exists at least one instance of an argument-form where all premises are true and the conclusion is false, then the argument-form is invalid.

Related Terms:

Examples:

Notes:  

 
<HR><HR><A NAME="1086"></A><H2>1086.  Non-Classical Formal Logics </H2><UL>[ <I> Non-Classical Formal Logics </I> ]</UL>
<H4>Description</H4>                

   <P> Non-classical logics dare question the list of virtues and the virtues themselves.  

 </P>  
<HR><HR><A NAME="1087"></A><H2>1087.  Logics and Validity </H2><UL>[ <I> Non-Classical Formal Logics :: Logics and Validity </I> ]</UL>
<H4>Description</H4>                

   <P> The definition of logic and validity presented so far is that of 'Classical Logic'.  However, the study of logic entails far more.  This section provides brief descriptions of some of those areas of study.  

 </P>  
<HR><HR><A NAME="1088"></A><H2>1088.  Relevance </H2><UL>[ <I> Non-Classical Formal Logics :: Logics and Validity :: Relevance </I> ]</UL>
<H4>Alternate Names</H4><UL>            

   <LI> Premise Relevance

</LI> </UL> <H4>Description</H4>                

   <P> Classical deductive logic has absolutely no concern for relevance in arguments.  For the most part, this is not an issue.  But there are two cases where classical logic produces what it considers "good" arguments, which are in fact peculiar.

</P>  <H4>Anything from Contradiction</H4>                

   <P> P &#8743; &#172;P   &#8870;   Q

   </P> <P> Notice that the premises are completely irrelevant to the conclusion, though this is a perfectly good argument.

</P>  <H4>Tautologous Conclusion</H4>                

   <P> P, Q   &#8870;   R &#8744; &#172;R

   </P> <P> Notice that no matter what premises are inserted into the sequent, relevant or not, the argument is good.

</P>  <H4>Tautology from Contradiction</H4>                

   <P> P &#8743; &#172;P  &#8870;  R &#8744; &#172;R

   </P> <P> As if the first two cases weren't astonishing enough, we find that this third form (a consequence of either of the first two) is also a valid argument form.  

 </P>  
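<P> The classical verdicts above can be reproduced mechanically: an argument is truth-functionally valid iff no valuation makes all the premises true and the conclusion false.  A brute-force sketch (the helper below is our own; premises and conclusions are given as Python functions on valuations): </P>

```python
from itertools import product

# An argument is valid iff no row of the truth table makes every
# premise true and the conclusion false.  With the contradictory
# premise P & ~P no row makes the premises true, so  P & ~P |= Q
# holds vacuously; likewise any premises support a tautology R v ~R.

def is_valid(premises, conclusion, atoms):
    for row in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# P & ~P |= Q
print(is_valid([lambda v: v['P'] and not v['P']], lambda v: v['Q'], ['P', 'Q']))  # True
# P, Q |= R v ~R
print(is_valid([lambda v: v['P'], lambda v: v['Q']],
               lambda v: v['R'] or not v['R'], ['P', 'Q', 'R']))                  # True
# but P |= Q fails, as it should
print(is_valid([lambda v: v['P']], lambda v: v['Q'], ['P', 'Q']))                 # False
```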
<HR><HR><A NAME="1089"></A><H2>1089.  Imperative Logic </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic </I> ]</UL>
<H4>Description</H4>                

   <P> Note that Imperative Logic is not a classical logic.  

 </P>  
<HR><HR><A NAME="1090"></A><H2>1090.  Overview </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Overview </I> ]</UL>
<H4>Description</H4>                

   <P> Imperative logic studies arguments whose validity depends on imperatives (commands).  This breaks away from traditional logic, since traditionally a command has no truth value and thus is not a proposition.  

 </P>  
<HR><HR><A NAME="1091"></A><H2>1091.  Imperative </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Lexical Elements :: Imperative </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> <u>x</u>

   </LI> <LI> !x

</LI> </UL> <H4>Description</H4>                

   <P> Any proposition symbol, object or bound variable (in a predicate) may be underlined to indicate 'who' is being told to do something.

</P>  <H4>Notes</H4><UL>            

   <LI> Symbol !x (or <U>x</U>) is distinct from x.  A calculus may not freely change forms without a rule or axiom.  

 </LI> </UL> 
<HR><HR><A NAME="1092"></A><H2>1092.  Formation Rules </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Well-Formed Formulas (WFF's) :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> Any Proposition Symbol, underlined Proposition Symbol, or n-place Predicate followed by n object symbols (one of which may be underlined), is a WFF.

</LI> </OL> <H4>Notes</H4><UL>            

   <LI> The above rule may replace the first rule for either PL wff's or FOL wff's.  

 </LI> </UL> 
<HR><HR><A NAME="1093"></A><H2>1093.  "Do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>A</u>

   </LI> <LI> A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1094"></A><H2>1094.  "Don't do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1095"></A><H2>1095.  "Do A and B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Do A and B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>A</u> &#8743; <u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1096"></A><H2>1096.  "Do A or B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Do A or B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>A</u> &#8744; <u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1097"></A><H2>1097.  "Don't do either A or B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't do either A or B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>A</u> &#8744; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1098"></A><H2>1098.  "Don't both do A and do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't both do A and do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>A</u> &#8743; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1099"></A><H2>1099.  "Don't combine doing A with doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't combine doing A with doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>A</u> &#8743; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1100"></A><H2>1100.  "Don't combine doing A with not doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't combine doing A with not doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>A</u> &#8743; &#172;<u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1101"></A><H2>1101.  "Don't do A without doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't do A without doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>A</u> &#8743; &#172;<u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1102"></A><H2>1102.  "You're doing A and you're doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "You're doing A and you're doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8743; B  

 </LI> </UL> 
<HR><HR><A NAME="1103"></A><H2>1103.  "You're doing A, but do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "You're doing A, but do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8743; <u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1104"></A><H2>1104.  "Do A and B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Do A and B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>A</u> &#8743; <u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1105"></A><H2>1105.  "If you're doing A, then you're doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "If you're doing A, then you're doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8594; B  

 </LI> </UL> 
<HR><HR><A NAME="1106"></A><H2>1106.  "If you (in fact) are doing A, then do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "If you (in fact) are doing A, then do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8594; <u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1107"></A><H2>1107.  "Do A, only if you (in fact) are doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Do A, only if you (in fact) are doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>A</u> &#8594; B  

 </LI> </UL> 
<HR><HR><A NAME="1108"></A><H2>1108.  "If you (in fact) are doing A, then don't do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "If you (in fact) are doing A, then don't do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8594; &#172;<u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1109"></A><H2>1109.  "Don't combine doing A with doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: Propositional :: "Don't combine doing A with doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>A</u> &#8743; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1110"></A><H2>1110.  "X, do (or be) A." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "X, do (or be) A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1111"></A><H2>1111.  "X, do A to Y." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "X, do A to Y." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A<u>x</u>y  

 </LI> </UL> 
<HR><HR><A NAME="1112"></A><H2>1112.  "Everyone does A." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "Everyone does A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8704;xAx  

 </LI> </UL> 
<HR><HR><A NAME="1113"></A><H2>1113.  "Let everyone do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "Let everyone do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8704;xA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1114"></A><H2>1114.  "Let everyone who (in fact) is doing A do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "Let everyone who (in fact) is doing A do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8704;x(Ax &#8594; B<u>x</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1115"></A><H2>1115.  "Let someone who (in fact) is doing A do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "Let someone who (in fact) is doing A do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8707;x(Ax &#8743; B<u>x</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1116"></A><H2>1116.  "Let someone both do A and do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Syntax :: Formalization Hints :: First-Order :: "Let someone both do A and do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8707;x(A<u>x</u> &#8743; B<u>x</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1117"></A><H2>1117.  Calculus </H2><UL>[ <I> Non-Classical Formal Logics :: Imperative Logic :: Inference Theory :: Calculus </I> ]</UL>
<H4>Description</H4>                

   <P> No new inference rules are needed beyond those needed by the logic without Imperative components.  But we must be certain that we treat 'A' and '<u>A</u>' as distinct symbols.  Thus, 'A' and '&#172;<u>A</u>' are not contradictories.

   </P> <P> Imperative logic does, however, introduce a problem with the definition of a valid argument.  For, what's the truth value of an imperative such as '<u>A</u>'?  The usual truth-based definition of validity would invalidate all arguments concluding in imperatives.

</P>  <H4>Validity</H4>                

   <P> An argument is <I>valid</I> if the conjunction of its premises with the contradictory of its conclusion is inconsistent.  

 </P>  
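<P> As an informal sketch of this definition of validity, the following Python fragment (my own illustration, not part of any published calculus) writes an underlined atom with a '!' suffix so that 'A' and '<u>A</u>' remain distinct symbols, and tests validity by searching truth assignments for the premises conjoined with the contradictory of the conclusion: </P>

```python
from itertools import product

# Formulas are nested tuples; an atom is a string.  An underlined
# (imperative) atom gets a '!' suffix, so 'A' and 'A!' are distinct
# symbols, exactly as the calculus requires.
NOT, AND = 'not', 'and'

def atoms(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(part) for part in f[1:]))

def value(f, row):
    if isinstance(f, str):
        return row[f]
    if f[0] == NOT:
        return not value(f[1], row)
    if f[0] == AND:
        return value(f[1], row) and value(f[2], row)
    raise ValueError(f[0])

def consistent(formulas):
    # A set of formulas is consistent iff some truth assignment
    # to its atoms makes them all true.
    syms = sorted(set().union(*(atoms(f) for f in formulas)))
    return any(
        all(value(f, dict(zip(syms, bits))) for f in formulas)
        for bits in product([True, False], repeat=len(syms))
    )

def valid(premises, conclusion):
    # Valid iff the premises plus the contradictory of the
    # conclusion are inconsistent.
    return not consistent(list(premises) + [(NOT, conclusion)])

# "Do A and do B", therefore "Do A": valid.
print(valid([(AND, 'A!', 'B!')], 'A!'))   # True
# "You're doing A", therefore "Do A": invalid, since 'A' and 'A!'
# are distinct symbols.
print(valid(['A'], 'A!'))                 # False
```

<P> The surrogate truth values assigned to imperatives exist only inside the consistency search, which is how this definition sidesteps the question of an imperative's real truth value. </P>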
<HR><HR><A NAME="1118"></A><H2>1118.  Hybrid Logics </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics </I> ]</UL>
<H4>Description</H4>                

   <P> A hybrid logic is a logic that combines two or more logics such that the inference rules (and/or axioms) consider the interaction between the systems.  A system that uses two or more logics but does <I>not</I> consider the interaction between the separate logics is not a hybrid system.  For example, First Order Logic is usually used in combination with Propositional logic (that is, a single proposition may contain both quantifiers and truth-functional operators) but is not a hybrid logic because each inference rule is either purely First Order or purely Propositional.  We could turn it into a hybrid logic with the addition of an inference rule such as the following:

<UL>
   If P is a one place predicate and a and c are constants, then for any proposition of the form ((Pa &#8743; Pc) &#8743; &#172;a=c), we can infer &#8704;xPx.
</UL>

   Clearly this inference rule is nonsensical, but its addition to a First Order Logic makes the logic a hybrid because the rule allows an interaction between the two logics being used.
  

 </P>  
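<P> To make the point concrete, here is a minimal Python sketch (the tuple encoding and all names are my own invention) of the nonsensical rule above as a purely syntactic pattern match.  What makes a system hybrid is only that a single rule mentions material from both logics at once: </P>

```python
# Formula encoding (illustration only): ('and', l, r), ('not', f),
# ('eq', a, c) for identity, ('pred', P, a) for a one-place predication,
# and ('all', x, body) for a universal quantification.

def hybrid_rule(f):
    # Match ((Pa and Pc) and not a=c), mixing truth-functional,
    # quantifier, and identity material in one rule.
    if (isinstance(f, tuple) and f[0] == 'and'
            and f[1][0] == 'and' and f[2][0] == 'not'
            and f[2][1][0] == 'eq'):
        (_, (_, (_, P, a), (_, P2, c)), (_, (_, a2, c2))) = f
        if P == P2 and a == a2 and c == c2 and a != c:
            # The (nonsensical) conclusion: everything is P.
            return ('all', 'x', ('pred', P, 'x'))
    return None  # rule does not apply

premise = ('and',
           ('and', ('pred', 'P', 'a'), ('pred', 'P', 'c')),
           ('not', ('eq', 'a', 'c')))
print(hybrid_rule(premise))  # ('all', 'x', ('pred', 'P', 'x'))
```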
<HR><HR><A NAME="1119"></A><H2>1119.  Overview </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Overview </I> ]</UL>
<H4>Description</H4>                

   <P> Deontic logic studies arguments whose validity depends on 'ought', 'permissible' and similar notions.  Deontic logic is used in combination with Imperative logic.  

 </P>  
<HR><HR><A NAME="1120"></A><H2>1120.  Imperative </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: Lexical Elements :: Imperative </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> <u>x</u>

   </LI> <LI> !x

</LI> </UL> <H4>Description</H4>                

   <P> Any proposition symbol, object or bound variable (in a predicate) may be underlined to indicate 'who' is being told to do something.

</P>  <H4>Notes</H4><UL>            

   <LI> Symbol !x (or <U>x</U>) is distinct from x.  A calculus may not freely change forms without a rule or axiom.  

 </LI> </UL> 
<HR><HR><A NAME="1121"></A><H2>1121.  Obligatory </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: Lexical Elements :: Obligatory </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> O&#934;

</LI> </UL> <H4>Translation</H4>                

   <P> It ought to be that &#934;.  

 </P>  
<HR><HR><A NAME="1122"></A><H2>1122.  Permissible </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: Lexical Elements :: Permissible </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> R&#934;

</LI> </UL> <H4>Translation</H4>                

   <P> It's permissible/all right that &#934;.  

 </P>  
<HR><HR><A NAME="1123"></A><H2>1123.  Formation Rules </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: Well Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> If &#934; is a wff, then O&#934; and R&#934; are wffs.

   </LI> <LI> Any Proposition Symbol, underlined Proposition Symbol, or n-place Predicate followed by n object symbols (one of which may be underlined), is a WFF.
  

 </LI> </OL> 
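<P> A rough Python sketch of these two formation rules follows; the plain-text encoding (capitals for proposition symbols and predicates, lowercase for objects, '_' after a symbol marking underlining) is my own invention, not Gensler's notation: </P>

```python
import re

# 'A_' encodes the imperative "do A"; 'Ax_y' underlines object x in
# the predication Axy.
ATOM = re.compile(r'^[A-Z]_?[a-z_]*$')

def is_wff(s):
    # Rule 1: if F is a wff, then OF and RF are wffs.
    if s.startswith(('O', 'R')) and len(s) > 1 and is_wff(s[1:]):
        return True
    # Rule 2: a proposition symbol (possibly underlined), or a
    # predicate followed by object symbols, at most one underlined.
    if ATOM.match(s):
        return s.count('_') <= 1
    return False

print(is_wff('OA_'))    # True:  "it's obligatory that A be done"
print(is_wff('RAx_y'))  # True:  "it's all right for x to do A to y"
print(is_wff('abc'))    # False: no proposition symbol or predicate
```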
<HR><HR><A NAME="1124"></A><H2>1124.  "It's obligatory that A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's obligatory that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1125"></A><H2>1125.  "X ought to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "X ought to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> OA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1126"></A><H2>1126.  "X ought to do A to Y." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "X ought to do A to Y." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> OA<u>x</u>y  

 </LI> </UL> 
<HR><HR><A NAME="1127"></A><H2>1127.  "It's permissible that A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's permissible that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> RA  

 </LI> </UL> 
<HR><HR><A NAME="1128"></A><H2>1128.  "It's all right for X to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's all right for X to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> RA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1129"></A><H2>1129.  "It's all right for X to do A to Y." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's all right for X to do A to Y." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> RA<u>x</u>y  

 </LI> </UL> 
<HR><HR><A NAME="1130"></A><H2>1130.  "Act A is wrong." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "Act A is wrong." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;R<u>A</u>

   </LI> <LI> O&#172;<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1131"></A><H2>1131.  "Act A isn't all right." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "Act A isn't all right." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;R<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1132"></A><H2>1132.  "Act A ought not to be done." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "Act A ought not to be done." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1133"></A><H2>1133.  "It ought to be that A and B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It ought to be that A and B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O(<u>A</u> &#8743; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1134"></A><H2>1134.  "It's all right that A or B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's all right that A or B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R(<u>A</u> &#8744; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1135"></A><H2>1135.  "If you do A, then you ought not to do B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "If you do A, then you ought not to do B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> A &#8594; O&#172;<u>B</u>  

 </LI> </UL> 
<HR><HR><A NAME="1136"></A><H2>1136.  "You ought not to combine doing A with doing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "You ought not to combine doing A with doing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;(<u>A</u> &#8743; <u>B</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1137"></A><H2>1137.  "It's obligatory that everyone do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's obligatory that everyone do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8704;xA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1138"></A><H2>1138.  "It's permissible to do A and B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: Propositional :: "It's permissible to do A and B." </I> ]</UL>
<H4>Description</H4>                

   <P> This form is ambiguous.  Consider this instance:

<PRE>
   It's permissible to marry Ann and Beth.
</PRE>

   This can be interpreted in two ways:  either it's permissible to marry them both (bigamy), or it's permissible to marry either one of them.

</P>  <H4></H4><UL>            

   <LI> R(<u>A</u> &#8743; <u>B</u>), Bigamy is permissible.

   </LI> <LI> R<u>A</u> &#8743; R<u>B</u>, We can marry either one.  

 </LI> </UL> 
<HR><HR><A NAME="1139"></A><H2>1139.  "It isn't obligatory that everyone do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "It isn't obligatory that everyone do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;O&#8704;xA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1140"></A><H2>1140.  "it's obligatory that not everyone do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "it's obligatory that not everyone do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;&#8704;xA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1141"></A><H2>1141.  "It's obligatory that everyone refrain from doing A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "It's obligatory that everyone refrain from doing A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8704;x&#172;A<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1142"></A><H2>1142.  "It's obligatory that someone answer the phone." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "It's obligatory that someone answer the phone." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8707;xA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1143"></A><H2>1143.  "There's someone who has the obligation to answer the phone." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "There's someone who has the obligation to answer the phone." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8707;xOA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1144"></A><H2>1144.  "It's obligatory that some who kill repent." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "It's obligatory that some who kill repent." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8707;x(Kx &#8743; R<u>x</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1145"></A><H2>1145.  "It's obligatory that some kill who repent." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "It's obligatory that some kill who repent." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8707;x(K<u>x</u> &#8743; Rx)  

 </LI> </UL> 
<HR><HR><A NAME="1146"></A><H2>1146.  "It's obligatory that some both kill and repent." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Syntax :: formalization Hints :: First-Order :: "It's obligatory that some both kill and repent." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#8707;x(K<u>x</u> &#8743; R<u>x</u>)  

 </LI> </UL> 
<HR><HR><A NAME="1147"></A><H2>1147.  Semantics </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Language :: Semantics </I> ]</UL>
<H4>Description</H4>                

   <P> 'Ought' is meant in the sense of 'all things considered, it ought to be that P.'

   </P> <P> There are at least two uses which differ from this:

</P>  <H4></H4><UL>            

   <LI> "I ought to take you to the movies, since I promised."<BR>
   (This would definitely be overridden by, "I ought to take my wife to the hospital.")

   </LI> <LI> "I ought to wear a tie to work, since it's mandatory."  

 </LI> </UL> 
 
<HR><HR><A NAME="1148"></A><H2>1148.  Reverse Squiggle (RS) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Inference Rules :: Reverse Squiggle (RS) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;O&#934;  &#8870;  R&#172;&#934;

   </LI> <LI> &#172;R&#934;  &#8870;  O&#172;&#934;  

 </LI> </UL> 
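<P> As a sketch, Reverse Squiggle can be viewed as a rewrite on formula trees.  This Python fragment (the tuple encoding is my own illustration) performs both directions: </P>

```python
# not-O(F) yields R(not-F), and not-R(F) yields O(not-F).
# A formula is a string atom or a nested tuple such as ('not', ('O', 'A')).

def reverse_squiggle(f):
    if f[0] == 'not' and f[1][0] in ('O', 'R'):
        # O and R are duals: swap the operator, push the negation in.
        dual = 'R' if f[1][0] == 'O' else 'O'
        return (dual, ('not', f[1][1]))
    raise ValueError('rule does not apply')

print(reverse_squiggle(('not', ('O', 'A'))))  # ('R', ('not', 'A'))
print(reverse_squiggle(('not', ('R', 'A'))))  # ('O', ('not', 'A'))
```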
<HR><HR><A NAME="1149"></A><H2>1149.  R Elimination (R E) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Inference Rules :: R Elimination (R E) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R<u>A</u>   &#8870;   Dn:  <u>A</u><BR>
   This rule is analogous to &#9671;E .  We must create a new 'deontic world' Dn.  

 </LI> </UL> 
<HR><HR><A NAME="1150"></A><H2>1150.  O Elimination (O E) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Inference Rules :: O Elimination (O E) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>A</u>   &#8870;   Dn:   <u>A</u><BR>
   This is analogous to &#9633;E .  We can drop into any deontic world.  

 </LI> </UL> 
<HR><HR><A NAME="1151"></A><H2>1151.  Hare's Law (Prescriptivity) (HL) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Inference Rules :: Hare's Law (Prescriptivity) (HL) </I> ]</UL>
<H4>Description</H4>                

   <P> An ought judgment entails the corresponding imperative:  "You ought to do A" entails "Do A."

</P>  <H4>Forms</H4><UL>            

   <LI> &#9633;( O<u>A</u>  &#8594;  <u>A</u> )

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Hare's Law just makes it possible to drop O (OE) into the same world.

   </LI> <LI> It also says that the proposition:  "you ought to do it, but don't"  is inconsistent.

   </LI> <LI> This law fails for some weaker descriptive senses of 'ought', e.g. "You ought (according to company policy) to do it, but don't do it."

   </LI> <LI> This law seems to hold for the all-things-considered, normative sense of 'ought'; this seems inconsistent:  "All things considered, you ought to do it but don't do it."

   </LI> <LI>  Some philosophers reject Hare's law.  

 </LI> </UL> 
<HR><HR><A NAME="1152"></A><H2>1152.  Other Laws </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Other Laws </I> ]</UL>
<H4>Description</H4>                

   <P> These laws are not part of Gensler's logic because they under-generate.  (They exclude valid cases.)  

 </P>  
<HR><HR><A NAME="1153"></A><H2>1153.  Hume's Law </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Other Laws :: Hume's Law </I> ]</UL>
<H4>Description</H4>                

   <P> "You can't deduce an "ought" from and "is"."

   </P> <P> David Hume claimed that we can't validly deduce what we ought to do from premises that don't contain "ought" or similar notions.  Hume's Law fails for descriptive uses of "ought" but seems to hold for the all-things-considered sense.

   </P> <P> "You can't deduce an "ought" from an "is":  If B is a consistent non-evaluative statement and A is a simple contingent action, then B doesn't entail "Act A ought to be done.""

   </P> <P> &#172;&#9633;( B  &#8594;  O<u>A</u> )

   </P> <P> This complex wording sidesteps some trivial cases where we clearly can deduce an "ought" from an "is."  

 </P>  
<HR><HR><A NAME="1154"></A><H2>1154.  Poincare's Law </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Other Laws :: Poincare's Law </I> ]</UL>
<H4>Description</H4>                

   <P> "You can't deduce an imperative from an "is"."

   </P> <P> Jules Henri Poincare claimed that we can't validly deduce an imperative from indicative premises that don't contain "ought" or similar notions.

   </P> <P> "You can't deduce an imperative from an "is":  If B is a consisten non-evaluative statement and A is a simple contingent action, then B doesn't entail the imperative "Do act A."

   </P> <P> &#172;N( B  &#8594;  <u>A</u> )

   </P> <P> This complex wording blocks trivial cases where we clearly can deduce an imperative from an "is."  

 </P>  
<HR><HR><A NAME="1155"></A><H2>1155.  Laws that add Alethic Logic </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Laws that add Alethic Logic </I> ]</UL>
<H4>Description</H4>                

   <P> This works fine as long as we nest deontic worlds in possible worlds or vice versa.  

 </P>  
<HR><HR><A NAME="1156"></A><H2>1156.  Kant's Law (KL) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Laws that add Alethic Logic :: Kant's Law (KL) </I> ]</UL>
<H4>Description</H4>                

   <P> "Ought" implies "can":  "You ought to do A" entails "It's possible for you to do A."

</P>  <H4>Forms</H4><UL>            

   <LI> O<u>A</u>   &#8870;   &#9671;A

</LI> </UL> <H4>Notes</H4><UL>            

   <LI> Can we also write this as:  &#9633;( O<u>A</u>  &#8594;  &#9671;A ) ?

   </LI> <LI> KL also claims that "You ought to do it, but it's impossible" is inconsistent.

   </LI> <LI> KL fails for descriptive senses of "ought"; since company policy may require impossible things, there's no inconsistency in this: "You ought (according to company policy) to do it, but it's impossible."

   </LI> <LI> KL holds for all-things-considered uses of "ought".

   </LI> <LI> It's a bit odd to use '&#9671;' for things that we are capable of doing instead of what's logically possible, but it seems to work OK (Gensler, p. 201).  

 </LI> </UL> 
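<P> A toy possible-worlds reading of Kant's Law, assuming (as the law seems to require) that every world has at least one deontic alternative.  The model, worlds, and names below are my own illustration, not Gensler's: </P>

```python
# 'O A' holds at a world w iff A is done at every deontic alternative
# of w; "can do A" holds iff A is done at some world.

worlds = {'w1', 'w2', 'w3'}
deontic = {'w1': {'w2'}, 'w2': {'w2'}, 'w3': {'w2', 'w3'}}  # all nonempty
A_done_at = {'w2'}  # worlds where act A is done

def obligatory(w):
    return all(d in A_done_at for d in deontic[w])

def possible():
    return any(w in A_done_at for w in worlds)

def kants_law_holds():
    # Wherever O A holds, some deontic alternative makes A true,
    # and that alternative is itself a possible world -- so A is possible.
    return all(possible() for w in worlds if obligatory(w))

print(kants_law_holds())  # True
```

<P> If a world's deontic set were allowed to be empty, O A could hold vacuously at it while A was done nowhere, and the law would fail; the nonempty-alternative assumption does the real work. </P>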
<HR><HR><A NAME="1157"></A><H2>1157.  Indicative Transfer (IT) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Deontic Hybrid :: Inference Theory :: Calculus :: Laws that add Alethic Logic :: Indicative Transfer (IT) </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> Dn: A   &#8870;   A<BR>
   That is, we can freely transfer indicatives between deontic worlds within the same POSSIBLE world.  E.g., if some indicative proposition is in world W1D1, it cannot be transferred into the actual world because that would be going across possible worlds.  

 </LI> </UL> 
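<P> The world-prefix bookkeeping behind this restriction can be sketched as follows; the label scheme (W for possible worlds, D for deontic worlds, 'W0' standing in for the actual world) is my own illustration: </P>

```python
# A label like 'W1D1' names deontic world D1 of possible world W1;
# the part before any 'D' names the possible world.

def same_possible_world(w1, w2):
    return w1.split('D')[0] == w2.split('D')[0]

def can_transfer(src, dst):
    # Indicatives move freely between deontic worlds of the SAME
    # possible world, but never across possible worlds.
    return same_possible_world(src, dst)

print(can_transfer('W1D1', 'W1D2'))  # True: both inside W1
print(can_transfer('W1D1', 'W0'))    # False: crosses possible worlds
```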
<HR><HR><A NAME="1158"></A><H2>1158.  Overview </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Overview </I> ]</UL>
<H4>Description</H4>                

   <P> Belief logic studies patterns of consistent believing and willing.  Gensler's belief logic works closely with his Imperative logic.  

 </P>  
<HR><HR><A NAME="1159"></A><H2>1159.  Imperative </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Lexical Elements :: Imperative </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> <u>x</u>

   </LI> <LI> !x

</LI> </UL> <H4>Description</H4>                

   <P> Any proposition symbol, object or bound variable (in a predicate) may be underlined to indicate 'who' is being told to do something.

</P>  <H4>Notes</H4><UL>            

   <LI> Symbol !x (or <U>x</U>) is distinct from x.  A calculus may not freely change forms without a rule or axiom.  

 </LI> </UL> 
<HR><HR><A NAME="1160"></A><H2>1160.  Indicative Form </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Lexical Elements :: Epistemological Operator :: Indicative Form </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> x:P<BR>
   Where P is some Proposition Symbol.

</LI> </UL> <H4>Translation</H4>                

   <P> "x accepts the indicative/imperative P."

   </P> <P> This form can also be taken as one of the following:

</P>  <H4></H4><UL>            

   <LI> 'accepts that,'
   </LI> <LI> 'believes that,'
   </LI> <LI> 'assents to the fact that,'
   </LI> <LI> 'says in x's own heart that,'
   </LI> <LI> 'acts (in order) to,'
   </LI> <LI> 'desires that,'
   </LI> <LI> 'wants that,'

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> x:P<BR>
   x accepts that, P.<BR>
   x believes that, P.<BR>
   x endorses that, P.<BR>
   x assents to the fact that, P.<BR>
   x says in his/her heart that, P.

   </LI> <LI> x:<u>P</u> <I>or</I> x:P<u>x</u><BR>
   x accepts that, "Do P."<BR>
   x is resolved to, "Do P."<BR>
   x will, "Do P."<BR>
   x acts/will act (in order) to, "Do P."

   </LI> <LI> x:P<u>y</u><BR>
   x accepts, "Let y do P."<BR>
   x wants/desires, "Let y do P."  

 </LI> </UL> 
<HR><HR><A NAME="1161"></A><H2>1161.  Imperative Form </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Lexical Elements :: Epistemological Operator :: Imperative Form </I> ]</UL>
<H4>Notation</H4><UL>            

   <LI> x:P<BR>
   Where P is some Proposition Symbol.

</LI> </UL> <H4>Translation</H4>                

   <P> "x, accept the indicative/imperative P."

   </P> <P> This form can also be taken as one of the following:

</P>  <H4></H4><UL>            

   <LI> 'accept,'
   </LI> <LI> 'believe,'
   </LI> <LI> 'assent to,'
   </LI> <LI> 'say in x's own heart,'
   </LI> <LI> 'act (in order) to,'
   </LI> <LI> 'desire,'
   </LI> <LI> 'want,'

</LI> </UL> <H4>Examples</H4><UL>            

   <LI> <u>x</u>:P<BR>
   x, accept that, P.<BR>
   x, believe that, P.<BR>
   x, endorse that, P.<BR>
   x, assent to the fact that, P.<BR>
   x, say in your heart, P.

   </LI> <LI> <u>x</u>:!P  <I>or</I>  <u>x</u>:P<u>x</u><BR>
   x, accepts, "Do P."<BR>
   x, be resolved to, "Do P."<BR>
   x, will, "Do P."<BR>
   x, act (in order) to, "Do P."

   </LI> <LI> <u>x</u>:P<u>y</u><BR>
   x, accept the imperative that, "y do P."<BR>
   x, want/desire that, "y do P."  

 </LI> </UL> 
<HR><HR><A NAME="1162"></A><H2>1162.  Formation Rules </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Well Formed Formulas (WFFs) :: Formation Rules </I> ]</UL>
<H4></H4><OL>            

   <LI> If P is a wff and x is a constant object symbol, then x:P and <u>x</u>:P are wffs.  

 </LI> </OL> 
<HR><HR><A NAME="1163"></A><H2>1163.  "You believe that A is true (false)." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "You believe that A is true (false)." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A

   </LI> <LI> u:&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="1164"></A><H2>1164.  "You don't believe that A is true." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "You don't believe that A is true." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;u:A  

 </LI> </UL> 
<HR><HR><A NAME="1165"></A><H2>1165.  "You don't believe A and you don't believe not-A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "You don't believe A and you don't believe not-A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;u:A &#8743; &#172;u:&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="1166"></A><H2>1166.  "You believe that if A then not-B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "You believe that if A then not-B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:(A &#8594; &#172;B)  

 </LI> </UL> 
<HR><HR><A NAME="1167"></A><H2>1167.  "If you believe A, then you don't believe B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "If you believe A, then you don't believe B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A &#8594; &#172;u:B  

 </LI> </UL> 
<HR><HR><A NAME="1168"></A><H2>1168.  "Don't combine believing A with believing B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Don't combine believing A with believing B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;(<u>u</u>:A &#8743; <u>u</u>:B)  

 </LI> </UL> 
<HR><HR><A NAME="1169"></A><H2>1169.  "Believe that A is true (false)." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Believe that A is true (false)." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A

   </LI> <LI> <u>u</u>:&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="1170"></A><H2>1170.  "Don't believe that A is true." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Don't believe that A is true." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1171"></A><H2>1171.  "If you in fact believe A, the don't believe B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "If you in fact believe A, the don't believe B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A &#8594; &#172;<u>u</u>:B  

 </LI> </UL> 
<HR><HR><A NAME="1172"></A><H2>1172.  "You believe that you ought to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "You believe that you ought to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:OA<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1173"></A><H2>1173.  "Everyone believes that they ought to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Everyone believes that they ought to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8704;x x:OA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1174"></A><H2>1174.  "Don't believe A and don't believe not-A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Don't believe A and don't believe not-A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;<u>u</u>:A &#8743; &#172;<u>u</u>:&#172;A  

 </LI> </UL> 
<HR><HR><A NAME="1175"></A><H2>1175.  "Believe that you ought to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Believe that you ought to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:OA<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1176"></A><H2>1176.  "Let everyone believe that they ought to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "Let everyone believe that they ought to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#8704;x <u>x</u>:OA<u>x</u>  

 </LI> </UL> 
<HR><HR><A NAME="1177"></A><H2>1177.  "You accept (endorse, assent to, say in your heart), "A is true."." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Belief Propositions :: "You accept (endorse, assent to, say in your heart), "A is true."." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A  

 </LI> </UL> 
<HR><HR><A NAME="1178"></A><H2>1178.  "You accept (endorse, assent to, say in your heart), "Let A be done."." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You accept (endorse, assent to, say in your heart), "Let A be done."." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1179"></A><H2>1179.  "You will (act, resolve to act, desire) that act A be done." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You will (act, resolve to act, desire) that act A be done." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1180"></A><H2>1180.  "You accept the imperative for you to do A now." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You accept the imperative for you to do A now." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1181"></A><H2>1181.  "You act (in order) to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You act (in order) to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1182"></A><H2>1182.  "You accept the imperative for you to do A in the future." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You accept the imperative for you to do A in the future." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1183"></A><H2>1183.  "You're resolved to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You're resolved to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1184"></A><H2>1184.  "You accept the imperative for X to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You accept the imperative for X to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A<u>x</u><BR>
   provided:  u not = x  

 </LI> </UL> 
<HR><HR><A NAME="1185"></A><H2>1185.  "You desire (or want) that X do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "You desire (or want) that X do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> u:A<u>x</u><BR>
   provided:   u not = x.  

 </LI> </UL> 
<HR><HR><A NAME="1186"></A><H2>1186.  "Accept (endorse, assent to, say in your heart) "Let act A be done."" </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Accept (endorse, assent to, say in your heart) "Let act A be done."" </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1187"></A><H2>1187.  "Will that act A be done." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Will that act A be done." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:<u>A</u>  

 </LI> </UL> 
<HR><HR><A NAME="1188"></A><H2>1188.  "Accept the imperative for you to do A now." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Accept the imperative for you to do A now." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1189"></A><H2>1189.  "Act (in order) to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Act (in order) to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1190"></A><H2>1190.  "Accept the imperative for you to do A in the future." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Accept the imperative for you to do A in the future." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1191"></A><H2>1191.  "Be resolved to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Be resolved to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A<u>u</u>  

 </LI> </UL> 
<HR><HR><A NAME="1192"></A><H2>1192.  "Accept the imperative for X to do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Accept the imperative for X to do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A<u>x</u><BR>
   provided:  u not = x.  

 </LI> </UL> 
<HR><HR><A NAME="1193"></A><H2>1193.  "Desire (or want) that X do A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Willing Propositions :: "Desire (or want) that X do A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> <u>u</u>:A<u>x</u><BR>
   provided:  u not = x.  

 </LI> </UL> 
<HR><HR><A NAME="1194"></A><H2>1194.  Contrast of Belief/Willing Forms </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Contrast of Belief/Willing Forms </I> ]</UL>
<H4>General Hints</H4><UL>            

   <LI> Use underlining <I>after</I> the ':' if the sentence is about <I>willing</I>.

   </LI> <LI> Use underlining <I>before</I> ':' to <I>tell</I> someone what to believe or will.

</LI> </UL> <H4>u:A</H4><UL>            

   <LI> Concepts:  believe, endorse, assent, say in your heart

   </LI> <LI> Indicative Form:  u:A<BR>
   you ... that, "A is true."

   </LI> <LI> Imperative Form:  <u>u</u>:A<BR>
   ... that, "A is true."

</LI> </UL> <H4>u:<u>A</u></H4><UL>            

   <LI> Concepts:  will that, endorse, assent to, say in your heart

   </LI> <LI> Indicative Form:  u:<u>A</u><BR>
   You ..., "Let A be done."

   </LI> <LI> Imperative Form:  <u>u</u>:<u>A</u><BR>
   ..., "Let act A be done."<BR>
   Will that A be done.

</LI> </UL> <H4>u:A<u>u</u></H4><UL>            

   <LI> Concepts:  act in order to, are/be resolved to, accept the imperative for you to, say in your heart

   </LI> <LI> Indicative Form:  u:A<u>u</u><BR>
   You ... do A (now/in the future).

   </LI> <LI> Imperative Form:  <u>u</u>:A<u>u</u><BR>
   ... do A (now, in the future).<BR>
   Act to do A.

</LI> </UL> <H4>u:A<u>x</u></H4><UL>            

   <LI> Concepts:  desire that X, want that X, accept the imperative for X to

   </LI> <LI> Indicative Form:  u:A<u>x</u><BR>
   You ... do A.

   </LI> <LI> Imperative Form:  <u>u</u>:A<u>x</u><BR>
   ... do A.<BR>
   ...provided:  you not = X  

 </LI> </UL> 
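<P> The four patterns contrasted above can be collected into a small lookup table.  A hedged Python sketch (Gensler's underlining is rendered here with asterisks, since plain strings cannot carry underlines; the readings are the ones given above): </P>

```python
# Illustrative summary of the belief/willing form patterns.
# Asterisks stand in for underlining: *u* = imperative "you", *A* = "Let A be done."
forms = {
    "u:A":     'You believe (accept) "A is true."',
    "*u*:A":   'Believe (accept) "A is true."',
    "u:*A*":   'You will (accept) "Let A be done."',
    "*u*:*A*": 'Will (accept) "Let A be done."',
    "u:A*u*":  "You act / are resolved to do A.",
    "u:A*x*":  "You desire (want) that X do A.",
}
for form, reading in forms.items():
    print(f"{form:9s} {reading}")
```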
<HR><HR><A NAME="1195"></A><H2>1195.  "A is evident to you." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "A is evident to you." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1196"></A><H2>1196.  "It's obligatory (rationally required) that you believe A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "It's obligatory (rationally required) that you believe A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1197"></A><H2>1197.  "Insofar as intellectual considerations are concerned (including your experiences), you ought to believe A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "Insofar as intellectual considerations are concerned (including your experiences), you ought to believe A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1198"></A><H2>1198.  "A is reasonable for you to believe." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "A is reasonable for you to believe." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1199"></A><H2>1199.  "It's all right (rationally permissible) that you believe A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "It's all right (rationally permissible) that you believe A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1200"></A><H2>1200.  "Insofar as intellectual considerations are concerned (including your experiences), it would be all right for you to believe A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "Insofar as intellectual considerations are concerned (including your experiences), it would be all right for you to believe A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1201"></A><H2>1201.  "It would be unreasonable for you to believe A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "It would be unreasonable for you to believe A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;R<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1202"></A><H2>1202.  "It's obligatory that you not believe A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "It's obligatory that you not believe A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;<u>u</u>:A  

 </LI> </UL> 
<HR><HR><A NAME="1203"></A><H2>1203.  "It would be reasonable for you to take no position on A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "It would be reasonable for you to take no position on A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> R(&#172;<u>u</u>:A &#8743; &#172;<u>u</u>:&#172;A)  

 </LI> </UL> 
<HR><HR><A NAME="1204"></A><H2>1204.  "It's evident to you that if A then B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "It's evident to you that if A then B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>u</u>:(A &#8594; B)  

 </LI> </UL> 
<HR><HR><A NAME="1205"></A><H2>1205.  "If it's evident to you that A, then it's evident to you that B." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "If it's evident to you that A, then it's evident to you that B." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>u</u>:A &#8594; O<u>u</u>:B  

 </LI> </UL> 
<HR><HR><A NAME="1206"></A><H2>1206.  "You ought not to combine believing A with believing not-A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: Rationality Propositions :: "You ought not to combine believing A with believing not-A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O&#172;(<u>u</u>:A &#8743; <u>u</u>:&#172;A)  

 </LI> </UL> 
<HR><HR><A NAME="1207"></A><H2>1207.  "You know that A." </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Syntax :: Formalization Hints :: "You know that A." </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> O<u>u</u>:A &#8743; (A &#8743; u:A)<BR>
   A is evident, A is true and you believe A.  

 </LI> </UL> 
<HR><HR><A NAME="1208"></A><H2>1208.  Calculus (Gensler) </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Inference Theory :: Calculus (Gensler) </I> ]</UL>
<H4>Description</H4>                

   <P> To make such a logic effective, it's necessary to add the premise "You ought to be consistent" to all arguments.

   </P> <P> The inference rules build in such a premise implicitly.  

 </P>  
<HR><HR><A NAME="1209"></A><H2>1209.  Inference rules </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Inference Theory :: Calculus (Gensler) :: Inference rules </I> ]</UL>
<H4>Description</H4>                

   <P> For the most part, we just create belief worlds.  These inference rules operate on imperative belief formulas, not on descriptive ones.  Our belief worlds are about what you are told to believe, not about what you actually believe.  

 </P>  
<HR><HR><A NAME="1210"></A><H2>1210.  <u>x</u>: E </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Inference Theory :: Calculus (Gensler) :: Inference rules :: <u>x</u>: E </I> ]</UL>
<H4>Form</H4><UL>            

   <LI> <u>x</u>:A  &#8870;  [Bx:]  A<BR>
   If x is told to believe A, then all of x's belief worlds have A.<BR>
   "Believe A."  <->  "All belief worlds have A."  

 </LI> </UL> 
<HR><HR><A NAME="1211"></A><H2>1211.  &#172;<u>x</u>: E </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Inference Theory :: Calculus (Gensler) :: Inference rules :: &#172;<u>x</u>: E </I> ]</UL>
<H4>Forms</H4><UL>            

   <LI> &#172;<u>x</u>:A  &#8870;  [Bx:]  &#172;A<BR>
   If x is told to refrain from believing A, then some belief world of x has not-A.<BR>
   "Refrain from believing A."  <->  "Not all belief worlds have A."  <->  "Some belief worlds have not-A."  

 </LI> </UL> 
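<P> These two rules can be modeled concretely: represent x's belief worlds as sets of formulas.  "Believe A" requires A in every belief world, while "Refrain from believing A" requires some belief world lacking A.  A minimal Python sketch (the worlds and formula names are illustrative, not part of Gensler's system): </P>

```python
# Belief worlds modeled as sets of atomic formulas (illustrative sketch).
belief_worlds = [{"A", "B"}, {"A", "C"}]

def told_to_believe(formula, worlds):
    # "Believe A"  <->  "All belief worlds have A."
    return all(formula in w for w in worlds)

def told_to_refrain(formula, worlds):
    # "Refrain from believing A"  <->  "Some belief world lacks A."
    return any(formula not in w for w in worlds)

print(told_to_believe("A", belief_worlds))  # True: A is in every world
print(told_to_refrain("B", belief_worlds))  # True: the second world lacks B
```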
<HR><HR><A NAME="1212"></A><H2>1212.  =E </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Inference Theory :: Calculus (Gensler) :: Inference rules :: =E </I> ]</UL>
<H4>Modification</H4>                

   <P> =E cannot be applied to any wff preceded by belief notation (x: <I>or</I> <u>x</u>:).  

 </P>  
<HR><HR><A NAME="1213"></A><H2>1213.  Drop Order </H2><UL>[ <I> Non-Classical Formal Logics :: Hybrid Logics :: Gensler Imperative-Epistemic Hybrid :: Inference Theory :: Calculus (Gensler) :: Hints :: Drop Order </I> ]</UL>
<H4>Description</H4>                

   <P> Doing proofs that combine operators can be confusing.  Drop operators in the following order.

</P>  <H4>First Eliminate These Weak Operators</H4><UL>            

   <LI> &#9671;

   </LI> <LI> &#172;<u>x</u>:

   </LI> <LI> R

   </LI> <LI> &#8707;x

</LI> </UL> <H4>Then Eliminate These Strong Operators</H4><UL>            

   <LI> <u>x</u>:

   </LI> <LI> O

   </LI> <LI> &#8704;x

</LI> </UL> <H4>Finally Eliminate These</H4><UL>            

   <LI> &#9633;  

 </LI> </UL> 
<HR><HR><A NAME="1214"></A><H2>1214.  Notes </H2><UL>[ <I> Appendices :: Notes </I> ]</UL>
<H4>Description</H4>                

   <P> The accompanying notes are written in Microsoft Word format.  To download a copy, right click <A HREF="Deduction.doc" TARGET=_top> here </A> and select "Save Link As ..." or "Save Target As ...".  Enjoy.  And remember, comments are welcome.  

 </P>  
<HR><HR><A NAME="1215"></A><H2>1215.  Aristotle of Athens (384 - 322 BC) </H2><UL>[ <I> Appendices :: Chronology :: Greek Period :: Aristotle of Athens (384 - 322 BC) </I> ]</UL>
<H4>Description</H4>                

   <P> A student of Plato, who was in turn a student of Socrates.

   </P> <P> Aristotle's predecessors had been interested in the art of constructing persuasive arguments and in techniques for refuting the arguments of others, but it was Aristotle who first devised systematic criteria for analyzing and evaluating arguments.

   </P> <P> Aristotle's chief accomplishment is called <B>syllogistic logic</B>, a kind of logic in which the fundamental elements are <I>terms</I>, and arguments are evaluated as good or bad depending on how the terms are arranged in the argument.

   </P> <P> Aristotle also deserves credit for originating <B>modal logic</B>, a kind of logic that involves such concepts as possibility, necessity, belief, and doubt.

   </P> <P> In addition, Aristotle catalogued a number of informal fallacies.  

 </P>  
<HR><HR><A NAME="1216"></A><H2>1216.  Term Logic </H2><UL>[ <I> Appendices :: Chronology :: Greek Period :: Aristotle of Athens (384 - 322 BC) :: Contributions :: Term Logic </I> ]</UL>
<A HREF="#286" TARGET="baseframe">Term Logic</A>  

 
<HR><HR><A NAME="1217"></A><H2>1217.  Chrysippus of Soli (279 - 206 BC) </H2><UL>[ <I> Appendices :: Chronology :: Greek Period :: Chrysippus of Soli (279 - 206 BC) </I> ]</UL>
<H4>Description</H4>                

   <P> Chrysippus, one of the founders of the Stoic school, developed a logic in which the fundamental elements were <I>whole propositions</I>.  Chrysippus treated every proposition as either true or false and developed rules for determining the truth-value of compound propositions from the truth-values of their components.  In the course of doing so, he laid the foundation for the truth-functional interpretation of the logical connectives and introduced the notion of natural deduction.  

 </P>  
<HR><HR><A NAME="1218"></A><H2>1218.  Logic of Connectives </H2><UL>[ <I> Appendices :: Chronology :: Greek Period :: Chrysippus of Soli (279 - 206 BC) :: Contributions :: Logic of Connectives </I> ]</UL>
<A HREF="#383" TARGET="baseframe">Propositional Logic</A>  

 
<HR><HR><A NAME="1219"></A><H2>1219.  Galen (129 - c. 199) </H2><UL>[ <I> Appendices :: Chronology :: Greek Period :: Galen (129 - c. 199) </I> ]</UL>
<H4>Description</H4>                

Developed the theory of the compound categorical syllogism.  

  
<HR><HR><A NAME="1220"></A><H2>1220.  Compound Categorical Syllogisms </H2><UL>[ <I> Appendices :: Chronology :: Greek Period :: Galen (129 - c. 199) :: Contributions :: Compound Categorical Syllogisms </I> ]</UL>
<A HREF="#286" TARGET="baseframe">Term Logic</A>  

 
<HR><HR><A NAME="1221"></A><H2>1221.  Peter Abelard (1079 - 1142) </H2><UL>[ <I> Appendices :: Chronology :: Medieval Period :: Peter Abelard (1079 - 1142) </I> ]</UL>
<H4>Description</H4>                

   <P> Abelard reconstructed and refined the logic of Aristotle and Chrysippus as communicated by Boethius, and he originated a theory of universals that traced the universal character of general terms to concepts in the mind rather than to "natures" existing outside the mind, as Aristotle had held.  In addition, Abelard distinguished arguments that are valid because of their form from those that are valid because of their content, but he held that only formal validity is the "perfect" or conclusive variety.  

 </P>  
<HR><HR><A NAME="1222"></A><H2>1222.  William Sherwood (c. 1200 - 1271) </H2><UL>[ <I> Appendices :: Chronology :: Medieval Period :: William Sherwood (c. 1200 - 1271) </I> ]</UL>
<H4>Description</H4>                

   <P> Author of a treatise which contains the first expression of the "Barbara, Celarent, ..." poem for the valid forms of categorical syllogisms.  

 </P>  
<HR><HR><A NAME="1223"></A><H2>1223.  Peter of Spain (c. 1210 - 1277) </H2><UL>[ <I> Appendices :: Chronology :: Medieval Period :: Peter of Spain (c. 1210 - 1277) </I> ]</UL>
<H4>Description</H4>                

   <P> Author of <I>Summulae Logicales</I> which became the standard textbook in logic for three hundred years.  

 </P>  
<HR><HR><A NAME="1224"></A><H2>1224.  William of Ockham (c. 1285 - 1349) </H2><UL>[ <I> Appendices :: Chronology :: Medieval Period :: William of Ockham (c. 1285 - 1349) </I> ]</UL>
<H4>Description</H4>                

   <P> Ockham extended the theory of modal logic, conducted an exhaustive study of the forms of valid and invalid syllogisms, and further developed the idea of metalanguage, a higher level language used to discuss linguistic entities such as words, terms and propositions.  

 </P>  
<HR><HR><A NAME="1225"></A><H2>1225.  Age of Rhetoric </H2><UL>[ <I> Appendices :: Chronology :: Age of Rhetoric </I> ]</UL>
<H4>Description</H4>                

   <P> Toward the middle of the fifteenth century, a reaction set in against the logic of the Middle Ages.  Rhetoric largely displaced logic as the primary focus of attention, the logic of Chrysippus, which had already begun to lose its unique identity in the Middle Ages, was ignored altogether, and the logic of Aristotle was studied only in highly simplistic presentations.  

 </P>  
<HR><HR><A NAME="1226"></A><H2>1226.  Enlightenment Period </H2><UL>[ <I> Appendices :: Chronology :: Enlightenment Period </I> ]</UL>
<H4>Description</H4>                

   <P> A reawakening did not occur until two hundred years later through the work of Gottfried Wilhelm Leibniz.  

 </P>  
<HR><HR><A NAME="1227"></A><H2>1227.  Gottfried Wilhelm Leibniz (1646 - 1716) </H2><UL>[ <I> Appendices :: Chronology :: Enlightenment Period :: Gottfried Wilhelm Leibniz (1646 - 1716) </I> ]</UL>
<H4>Description</H4>                

   <P> Leibniz, a genius in numerous fields, attempted to develop a symbolic language or "calculus" that could be used to settle all forms of disputes, whether in theology, philosophy, or international relations.  As a result of this work, Leibniz is sometimes credited with being the father of symbolic logic.

   </P> <P> Two particularly notable creations of Leibniz are Leibniz's Law and the method of Reductio ad Absurdum.  

 </P>  
<HR><HR><A NAME="1228"></A><H2>1228.  Identity Symbol </H2><UL>[ <I> Appendices :: Chronology :: Enlightenment Period :: Gottfried Wilhelm Leibniz (1646 - 1716) :: Contributions :: Identity Symbol </I> ]</UL>
Two things are identical if everything that can be said of the one can be said of the other.  

 
<HR><HR><A NAME="1229"></A><H2>1229.  Leibniz' Laws </H2><UL>[ <I> Appendices :: Chronology :: Enlightenment Period :: Gottfried Wilhelm Leibniz (1646 - 1716) :: Contributions :: Leibniz' Laws </I> ]</UL>
<H4>See Also</H4><UL>            

   <LI> See <A HREF="#280" TARGET="baseframe">Leibniz' Laws</A>.  

 </LI> </UL> 
<HR><HR><A NAME="1230"></A><H2>1230.  Proof by Contradiction </H2><UL>[ <I> Appendices :: Chronology :: Enlightenment Period :: Gottfried Wilhelm Leibniz (1646 - 1716) :: Contributions :: Proof by Contradiction </I> ]</UL>
Reductio ad Absurdum  

 
<HR><HR><A NAME="1231"></A><H2>1231.  Bernard Bolzano (1781 - 1848) </H2><UL>[ <I> Appendices :: Chronology :: Enlightenment Period :: Bernard Bolzano (1781 - 1848) </I> ]</UL>
Leibniz's efforts to symbolize logic were carried into the nineteenth century by Bernard Bolzano.  

 
<HR><HR><A NAME="1232"></A><H2>1232.  Modern Period </H2><UL>[ <I> Appendices :: Chronology :: Modern Period </I> ]</UL>
With the arrival of the middle of the nineteenth century, logic commenced an extremely rapid period of development that has continued to this day.  Work in symbolic logic was done by a number of philosophers and mathematicians.  

 
<HR><HR><A NAME="1233"></A><H2>1233.  John Stuart Mill (1806 - 1873) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: John Stuart Mill (1806 - 1873) </I> ]</UL>
Responsible for initiating a revived interest in inductive logic, and for originating the methods of inductive inference that now bear his name.  

 
<HR><HR><A NAME="1234"></A><H2>1234.  George Boole (1815 - 1864) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: George Boole (1815 - 1864) </I> ]</UL>
Presented an alternate way of interpreting Aristotle's Universal Categorical Propositions which led to a strictly formal variation in the deductive theory.  

 
<HR><HR><A NAME="1235"></A><H2>1235.  John Venn (1834 - 1923) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: John Venn (1834 - 1923) </I> ]</UL>
Presented a graphical method for diagramming categorical propositions and for proving categorical syllogisms graphically.  

 
<HR><HR><A NAME="1236"></A><H2>1236.  Charles Sanders Peirce (1839 - 1914) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Charles Sanders Peirce (1839 - 1914) </I> ]</UL>
Developed a logic of relations, invented symbolic quantifiers, and suggested the truth table method for formulas in propositional logic.  
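<P> As a quick illustration of the truth table method, a formula can be tested for tautology by enumerating every assignment of truth values.  A minimal Python sketch (the example formula is ours, chosen only for illustration): </P>

```python
from itertools import product

# Truth table for ((p and q) -> p), writing the conditional as (not ... or ...).
rows = [(p, q, (not (p and q)) or p) for p, q in product([True, False], repeat=2)]
for p, q, value in rows:
    print(f"{p!s:5s} {q!s:5s} {value}")

# The formula is a tautology: it comes out true in every row.
print(all(value for _, _, value in rows))  # True
```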

 
<HR><HR><A NAME="1237"></A><H2>1237.  Emil Post (1897 - 1954) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Emil Post (1897 - 1954) </I> ]</UL>
Independently developed the truth table method.  

 
<HR><HR><A NAME="1238"></A><H2>1238.  Ludwig Wittgenstein (1889 - 1951) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Ludwig Wittgenstein (1889 - 1951) </I> ]</UL>
Independently developed the truth table method.  

 
<HR><HR><A NAME="1239"></A><H2>1239.  Tractatus Logico-Philosophicus (Publication) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Ludwig Wittgenstein (1889 - 1951) :: Tractatus Logico-Philosophicus (Publication) </I> ]</UL>
Conceived of philosophy as an analysis of hidden logical structure - with definitive attacks on Frege and Russell.  Wittgenstein's main interest was always to understand the relation between language, logic and the world.

Wittgenstein was struck by the use of models in a traffic court to represent an actual traffic accident.

He said that language is a picture of the world.

What any picture must have in common with reality in order to be able to depict it at all is logical form - the <I>form of reality</I>.  

 
<HR><HR><A NAME="1240"></A><H2>1240.  Gottlob Frege (1848 - 1925) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Gottlob Frege (1848 - 1925) </I> ]</UL>
Laid the foundations of modern mathematical logic.  His Begriffsschrift sets forth the theory of quantification.  

 
<HR><HR><A NAME="1241"></A><H2>1241.  Logic of Connectives </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Gottlob Frege (1848 - 1925) :: Begriffsschrift (publication, 1879) :: Unified :: Logic of Connectives </I> ]</UL>
See Chrysippus  

 
<HR><HR><A NAME="1242"></A><H2>1242.  Logic of Categories </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Gottlob Frege (1848 - 1925) :: Begriffsschrift (publication, 1879) :: Unified :: Logic of Categories </I> ]</UL>
See Aristotle  

 
<HR><HR><A NAME="1243"></A><H2>1243.  Context Principle </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Gottlob Frege (1848 - 1925) :: Contributions :: Context Principle </I> ]</UL>
The smallest unit that logic can deal with is a subject-predicate statement, or <I>proposition</I>.  It is only in the context of a proposition as a <I>whole</I> that we know the meanings of the words which compose it.  

 
<HR><HR><A NAME="1244"></A><H2>1244.  Propositional Calculus </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Gottlob Frege (1848 - 1925) :: Contributions :: Propositional Calculus </I> ]</UL>
<H4>See Also</H4><UL>            

   <LI> See <A HREF="#383" TARGET="baseframe">Propositional Logic</A>.

   </LI> <LI> See <A HREF="#555" TARGET="baseframe">Predicate Logic</A>.

   </LI> <LI> See <A HREF="#604" TARGET="baseframe">First-Order Logic</A>.  

 </LI> </UL> 
<HR><HR><A NAME="1245"></A><H2>1245.  Founding Mathematics </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Gottlob Frege (1848 - 1925) :: Founding Mathematics </I> ]</UL>
Frege originated the idea that logic, along with set theory, could be used to obtain a sound basis for all of mathematics.  

 
<HR><HR><A NAME="1246"></A><H2>1246.  Alfred North Whitehead (1861 - 1947) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Alfred North Whitehead (1861 - 1947) </I> ]</UL>
Continued Frege's work along with Russell.  

 
<HR><HR><A NAME="1247"></A><H2>1247.  Bertrand Russell (1872-1970) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Bertrand Russell (1872-1970) </I> ]</UL>
Continued Frege's work along with Whitehead.  Russell's <I>Principia Mathematica</I> attempted to reduce the whole of pure mathematics to logic.  The <I>Principia</I> is the source of much of the symbolism used today.  

 
<HR><HR><A NAME="1248"></A><H2>1248.  Russell's Paradox </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Bertrand Russell (1872-1970) :: Russell's Paradox </I> ]</UL>
Russell discovered a paradox in Set Theory which nullified Frege's efforts to found mathematics in set theory and logic.

1.	Sets may contain other sets as elements.
2.	So it makes sense to ask of any set whether it is a member of itself.
3.	Some sets are not members of themselves (the set of all teaspoons is not itself a teaspoon).
4.	So, what about this:  the set R of all sets which are not members of themselves.

Is R a member of itself?  If it is, then by its defining condition it is not; if it is not, then by its defining condition it is.

This contradiction showed that naive set theory, with its unrestricted principle of set formation, is inconsistent.  
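<P> The heart of the paradox is that the statement "R is a member of R" would have to be equivalent to its own negation, and no truth value satisfies p <-> not-p.  A minimal Python check: </P>

```python
# "R in R" would have to satisfy p <-> (not p).  No truth value does:
satisfying = [p for p in (True, False) if p == (not p)]
print(satisfying)  # [] : the defining condition of Russell's set is unsatisfiable
```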

 
<HR><HR><A NAME="1249"></A><H2>1249.  Surface Grammar </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Bertrand Russell (1872-1970) :: Surface Grammar </I> ]</UL>
Russell believed that the surface grammar (the school grammar of nouns, verbs and adjectives) hides the true form of a sentence.  He believed that if we could analyse language into a perfect logical structure, then many of the great philosophical problems of the day would disappear.  

 
<HR><HR><A NAME="1250"></A><H2>1250.  Russell's System </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Bertrand Russell (1872-1970) :: Russell's System </I> ]</UL>
The first system which combined all the capabilities of the previous systems.  

 
<HR><HR><A NAME="1251"></A><H2>1251.  Developed Frege's conception of quantifiers </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Bertrand Russell (1872-1970) :: Russell's System :: Developed Frege's conception of quantifiers </I> ]</UL>
This allowed him to distinguish 'all' from 'some' and removed the need to analyse existence as a predicate.  

 
<HR><HR><A NAME="1252"></A><H2>1252.  Kurt Goedel (1906 - 1978) </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Kurt Goedel (1906 - 1978) </I> ]</UL>
Made great contributions to formal logic, helping to develop an understanding of what constitutes a 'correct' system of deduction and then demonstrating how to prove that a system possesses these traits.  

 
<HR><HR><A NAME="1253"></A><H2>1253.  Definition of a Set </H2><UL>[ <I> Appendices :: Chronology :: Modern Period :: Georg Cantor (1845 - 1918) :: Set Theory :: Definition of a Set </I> ]</UL>
A set is a collection of elements that need not have anything in common.  Every set has a specific number of elements, which can be compared with the number of elements in other sets.

Operations on sets (intersection, union, complement, etc.) are closely related to the connectives of logic (AND, OR, NOT).  
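<P> The correspondence can be made concrete with Python's built-in sets (an illustrative sketch; the finite domain and predicate choices are our assumptions, not the outline's): intersection behaves like AND, union like OR, and complement relative to a domain like NOT. </P>

```python
domain = set(range(10))                  # a finite universe of discourse
A = {x for x in domain if x % 2 == 0}    # extension of "x is even"
B = {x for x in domain if x < 5}         # extension of "x is less than 5"

conj = A & B         # intersection  ~  "even AND less than 5"
disj = A | B         # union         ~  "even OR less than 5"
neg_a = domain - A   # complement    ~  "NOT even"

assert conj == {x for x in domain if x % 2 == 0 and x < 5}
assert disj == {x for x in domain if x % 2 == 0 or x < 5}
assert neg_a == {x for x in domain if not (x % 2 == 0)}
```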

 
<HR><HR><A NAME="1254"></A><H2>1254.  Logic Symbols </H2><UL>[ <I> Appendices :: Symbols :: Logic Symbols </I> ]</UL>
<H4></H4>                
<P><PRE>
   &#172;		0172	0x00AC	negation (not)
   &#8743;		8743	0x2227	conjunction (and)
   &#8744;		8744	0x2228	disjunction (or)
   &#8594;		8594	0x2192	conditional (if-then)
   &#8596;		8596	0x2194	biconditional (if and only if)
   &#8704;		8704	0x2200	universal quantifier (for all)
   &#8707;		8707	0x2203	existential quantifier (there exists)
   &#9671;		9671	0x25C7	possibility (modal)
   &#9633;		9633	0x25A1	necessity (modal)
</PRE>  

 </P>  
<HR><HR><A NAME="1255"></A><H2>1255.  Metalogic Symbols </H2><UL>[ <I> Appendices :: Symbols :: Metalogic Symbols </I> ]</UL>
<H4></H4>                
<P><PRE>
   &#8870;		8870	0x22A6	assertion / proves
   &#8871;		8871	0x22A7	models / semantic consequence
   <B>&#8756;</B>		8756	0x2234	therefore
   &#8876;		8876	0x22AC	does not prove
</PRE>  

 </P>  
<HR><HR><A NAME="1256"></A><H2>1256.  Misc. Logic Symbols </H2><UL>[ <I> Appendices :: Symbols :: Misc. Logic Symbols </I> ]</UL>
<H4></H4>                
<P><PRE>
   &#8868;		8868	0x22A4	tautology (top)
   &#8869;		8869	0x22A5	contradiction (bottom)
</PRE>  

 </P>  
<HR><HR><A NAME="1257"></A><H2>1257.  Set Theory Symbols </H2><UL>[ <I> Appendices :: Symbols :: Set Theory Symbols </I> ]</UL>
<H4></H4>                
<P><PRE>
   &#215;		0215	0x00D7	Cartesian product
   &#8709;		8709	0x2205	empty set
   &#8712;		8712	0x2208	is a member of
   &#8713;		8713	0x2209	is not a member of
   &#8745;		8745	0x2229	intersection
   &#8746;		8746	0x222A	union
   &#8834;		8834	0x2282	proper subset
   &#8836;		8836	0x2284	not a proper subset
   &#8838;		8838	0x2286	subset
   &#8840;		8840	0x2288	not a subset
   &#8835;		8835	0x2283	proper superset
   &#8837;		8837	0x2285	not a proper superset
   &#8839;		8839	0x2287	superset
   &#8841;		8841	0x2289	not a superset
</PRE>  

 </P>  
<HR><HR><A NAME="1258"></A><H2>1258.  Relational & Definition Symbols </H2><UL>[ <I> Appendices :: Symbols :: Relational & Definition Symbols </I> ]</UL>
<H4></H4>                
<P><PRE>
   &#8801;		8801	0x2261	equivalence / identity
   &#8804;		8804	0x2264	less than or equal to
   &#8805;		8805	0x2265	greater than or equal to
   &#8797;		8797	0x225D	equal by definition
</PRE>  

 </P>  
<HR><HR><A NAME="1259"></A><H2>1259.  Greek Alphabet </H2><UL>[ <I> Appendices :: Symbols :: Greek Alphabet </I> ]</UL>
<H4></H4>                
<P><PRE>
   &#913;		0913	0x0391
   &#914;		0914	0x0392
   &#915;		0915	0x0393
   &#916;		0916	0x0394
   &#917;		0917	0x0395
   &#918;		0918	0x0396
   &#919;		0919	0x0397
   &#920;		0920	0x0398
   &#921;		0921	0x0399
   &#922;		0922	0x039A
   &#923;		0923	0x039B
   &#924;		0924	0x039C
   &#925;		0925	0x039D
   &#926;		0926	0x039E
   &#927;		0927	0x039F
   &#928;		0928	0x03A0
   &#929;		0929	0x03A1
   &#931;		0931	0x03A3
   &#932;		0932	0x03A4
   &#933;		0933	0x03A5
   &#934;		0934	0x03A6
   &#935;		0935	0x03A7
   &#936;		0936	0x03A8
   &#937;		0937	0x03A9

   &#945;		0945	0x03B1
   &#946;		0946	0x03B2
   &#947;		0947	0x03B3
   &#948;		0948	0x03B4
   &#949;		0949	0x03B5
   &#950;		0950	0x03B6
   &#951;		0951	0x03B7
   &#952;		0952	0x03B8
   &#953;		0953	0x03B9
   &#954;		0954	0x03BA
   &#955;		0955	0x03BB
   &#956;		0956	0x03BC
   &#957;		0957	0x03BD
   &#958;		0958	0x03BE
   &#959;		0959	0x03BF
   &#960;		0960	0x03C0
   &#961;		0961	0x03C1
   &#962;		0962	0x03C2
   &#963;		0963	0x03C3
   &#964;		0964	0x03C4
   &#965;		0965	0x03C5
   &#966;		0966	0x03C6
   &#967;		0967	0x03C7
   &#968;		0968	0x03C8
   &#969;		0969	0x03C9
</PRE>  

 </P>  
<HR><HR><A NAME="1260"></A><H2>1260.  Baggini, Julian </H2><UL>[ <I> Appendices :: References :: Baggini, Julian </I> ]</UL>
<H4></H4>                
<P> Baggini, Julian, and Peter S. Fosl.  <U>The Philosopher's Toolkit: A Compendium of Philosophical Concepts and Methods</U>.  Malden: Blackwell Publishing, 2003.  

 </P>  
<HR><HR><A NAME="1261"></A><H2>1261.  Barwise, Jon </H2><UL>[ <I> Appendices :: References :: Barwise, Jon </I> ]</UL>
<H4></H4>                

<P> Barwise, Jon, et al.  <U>Language, Proof and Logic</U>.  New York:  CSLI Publications, 2000.

</P> <P> <A HREF="http://www-csli.stanford.edu/hp/" TARGET=_top> See website </A>  

 </P>  
<HR><HR><A NAME="1262"></A><H2>1262.  Carnap, Rudolf </H2><UL>[ <I> Appendices :: References :: Carnap, Rudolf </I> ]</UL>
<H4></H4>                
<P> Carnap, Rudolf.  <U>Introduction to Symbolic Logic and its Applications</U>.  Trans. William H. Meyer and John Wilkinson.  New York: Dover Publications, 1958.  

 </P>  
<HR><HR><A NAME="1263"></A><H2>1263.  Copi, Irving M. </H2><UL>[ <I> Appendices :: References :: Copi, Irving M. </I> ]</UL>
<H4></H4>                

<P> Copi, Irving M., and Carl Cohen.  <U>Introduction to Logic</U>.  11th ed.  Upper Saddle River: Prentice Hall, 2002.  

 </P>  
<HR><HR><A NAME="1264"></A><H2>1264.  Gensler, Harry J. </H2><UL>[ <I> Appendices :: References :: Gensler, Harry J. </I> ]</UL>
<H4></H4>                

<P> Gensler, Harry J.  <U>Introduction To Logic</U>.  New York:  Routledge, 2002.

</P> <P> <A HREF="http://www.jcu.edu/philosophy/gensler/" TARGET=_top> View homepage. </A>  

 </P>  
<HR><HR><A NAME="1265"></A><H2>1265.  Goble, Lou </H2><UL>[ <I> Appendices :: References :: Goble, Lou </I> ]</UL>
<H4></H4>                

<P> Goble, Lou, ed.  <U> The Blackwell Guide To Philosophical Logic</U>.  Malden:  Blackwell Publishers, 2001.  

 </P>  
<HR><HR><A NAME="1266"></A><H2>1266.  Hunter, Geoffrey </H2><UL>[ <I> Appendices :: References :: Hunter, Geoffrey </I> ]</UL>
<H4></H4>                

<P> Hunter, Geoffrey.  <U>Metalogic: An Introduction to The Metatheory of Standard First Order Logic</U>.  Berkeley:  U of California P, 1996.  

 </P>  
<HR><HR><A NAME="1267"></A><H2>1267.  Hurley, Patrick J. </H2><UL>[ <I> Appendices :: References :: Hurley, Patrick J. </I> ]</UL>
<H4></H4>                

<P> Hurley, Patrick J. <U>A Concise Introduction to Logic</U>.  8th ed.  Belmont: Wadsworth, 2003.  

 </P>  
<HR><HR><A NAME="1268"></A><H2>1268.  Lemmon, E. J. </H2><UL>[ <I> Appendices :: References :: Lemmon, E. J. </I> ]</UL>
<H4></H4>                

<P> Lemmon, E. J.  <U>Beginning Logic</U>.  Indianapolis:  Hackett Publishing Company, 1965.  

 </P>  
<HR><HR><A NAME="1269"></A><H2>1269.  Nolt, John </H2><UL>[ <I> Appendices :: References :: Nolt, John </I> ]</UL>
<H4></H4>                

<P> Nolt, John, Dennis Rohatyn, and Achille Varzi.  <U>Schaum's Outlines of Theory and Problems of Logic</U>.  2nd ed.  New York: McGraw-Hill, 1998.  

 </P>  
<HR><HR><A NAME="1270"></A><H2>1270.  Nolt, John </H2><UL>[ <I> Appendices :: References :: Nolt, John </I> ]</UL>
<H4></H4>                

<P> Nolt, John.  <U>Logics</U>.  Belmont: Wadsworth, 1997.  

 </P>  
<HR><HR><A NAME="1271"></A><H2>1271.  Priest, Graham </H2><UL>[ <I> Appendices :: References :: Priest, Graham </I> ]</UL>
<H4></H4>                

<P> Priest, Graham.  <U>An Introduction to Non-Classical Logic</U>.  Cambridge: Cambridge UP, 2001.  

 </P>  
<HR><HR><A NAME="1272"></A><H2>1272.  Read, Stephen </H2><UL>[ <I> Appendices :: References :: Read, Stephen </I> ]</UL>
<H4></H4>                

<P> Read, Stephen.  <U>Thinking About Logic: An Introduction To The Philosophy of Logic</U>.  New York:  Oxford UP, 1995.  

 </P>  
<HR><HR><A NAME="1273"></A><H2>1273.  Smith, Robin </H2><UL>[ <I> Appendices :: References :: Smith, Robin </I> ]</UL>
<H4></H4>                

   <P> Smith, Robin.  "Aristotle's Logic."  <U>The Stanford Encyclopedia of Philosophy</U>.  <A HREF="http://www.seop.leeds.ac.uk/entries/aristotle-logic/" TARGET=_top> View the article. </A>  

 </P>  
<HR><HR><A NAME="1274"></A><H2>1274.  Tarski, Alfred </H2><UL>[ <I> Appendices :: References :: Tarski, Alfred </I> ]</UL>
<H4></H4>                

<P> Tarski, Alfred.  <U>Introduction To Logic And To The Methodology of Deductive Sciences</U>.  1946. Trans. Olaf Helmer. New York:  Dover Publications, 1995.  

 </P>  
<HR><HR><A NAME="1275"></A><H2>1275.  Affirm </H2><UL>[ <I> Appendices :: Glossary :: Affirm </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4><UL>            

   <LI>   To state positively that something is true.  

 </LI> </UL> 
<HR><HR><A NAME="1276"></A><H2>1276.  Algorithm </H2><UL>[ <I> Appendices :: Glossary :: Algorithm </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> An <B>algorithm</B> is a fully determinate computational procedure.  

 </P>  
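<P> Euclid's greatest-common-divisor procedure is a standard illustration of the definition (the example is ours, not the outline's): each step is fully determined by the current state, and the procedure always terminates. </P>

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: fully determinate, and terminating because
    the remainder strictly decreases on every iteration."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```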
<HR><HR><A NAME="1277"></A><H2>1277.  Analyze </H2><UL>[ <I> Appendices :: Glossary :: Analyze </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To analyze something is to break it down into its components and study their nature and relationships.

</P>  <H4>Related Terms</H4><UL>            

   <LI> <A HREF="#1297" TARGET="baseframe">Synthesize</A>, <I>antonym</I>.

   </LI> <LI> <A HREF="#1289" TARGET="baseframe">Evaluate</A>, <I>Mistakenly used to mean...</I>.  

 </LI> </UL> 
<HR><HR><A NAME="1278"></A><H2>1278.  Arity </H2><UL>[ <I> Appendices :: Glossary :: Arity </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> From the endings of the words 'unary' and 'binary'.  <B>Arity</B> is the number of arguments, positions or operands that a construct such as an operator requires.

</P>  <H4>Examples</H4><UL>            

   <LI> &#172;P, negation has an arity of 1.

   </LI> <LI> P &#8743; Q, conjunction has an arity of 2.

</LI> </UL> <H4>Related Terms</H4><UL>            

   <LI> Arity of 1 (aka unary or monadic), takes one argument

   </LI> <LI> Arity of 2 (aka binary or dyadic), takes two arguments

   </LI> <LI> Arity of 3 (aka ternary or triadic), takes three arguments  

 </LI> </UL> 
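<P> In Python, the arity of a truth-functional operator written as a function can be read off its signature (an illustrative sketch; the helper name <I>arity</I> is our own): </P>

```python
import inspect

def negation(p):       return not p      # arity 1 (unary / monadic)
def conjunction(p, q): return p and q    # arity 2 (binary / dyadic)

def arity(op):
    """Number of arguments the operator requires."""
    return len(inspect.signature(op).parameters)

print(arity(negation), arity(conjunction))  # 1 2
```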
<HR><HR><A NAME="1279"></A><H2>1279.  Assent </H2><UL>[ <I> Appendices :: Glossary :: Assent </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To agree with a proposition.  

 </P>  
<HR><HR><A NAME="1280"></A><H2>1280.  Assert </H2><UL>[ <I> Appendices :: Glossary :: Assert </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To state or declare positively.  

 </P>  
<HR><HR><A NAME="1281"></A><H2>1281.  Calculus </H2><UL>[ <I> Appendices :: Glossary :: Calculus </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A calculus is a system for the manipulation of a symbolic language.  In logic, we use calculi for the derivation of conclusions from premises.  

 </P>  
<HR><HR><A NAME="1282"></A><H2>1282.  Classical Logic </H2><UL>[ <I> Appendices :: Glossary :: Classical Logic </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Classical Logic refers to deductive logic in the traditional sense of the term.  It is characterized by the following:

</P>  <H4>Characteristics</H4><UL>            

   <LI> Bivalence (every proposition has exactly one truth value, true or false).  First defined by Leibniz with his Laws or Axioms.

   </LI> <LI> The logical operators are defined in terms of the truth values of their operands.  (This leads to concern over the conditional, for whose truth table an apparent counterexample can be constructed in each of the four cases.  In ordinary usage, the English 'if-then' is not <I>material</I> but <I>necessary</I>.)

   </LI> <LI> The definition of a deductive argument: 'It is impossible for the conclusion to be false while all the premises are true.'  (This definition does not take relevance into consideration.  Although in most cases the normal rules of deduction entail relevance, there are two exceptions: Anything from Contradiction, and Tautology from Anything.)  

 </LI> </UL> 
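<P> Because of bivalence, every truth-functional operator can be given exhaustively by a finite table.  The sketch below (illustrative Python) tabulates the material conditional, which is false only in the one row where the antecedent is true and the consequent false: </P>

```python
from itertools import product

def conditional(p, q):
    return (not p) or q   # the material 'if p then q'

# Enumerate all four valuations of (P, Q) and record the operator's value.
table = {(p, q): conditional(p, q) for p, q in product([False, True], repeat=2)}
for (p, q), value in sorted(table.items()):
    print(f"P={p!s:5}  Q={q!s:5}  P->Q={value}")
```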
<HR><HR><A NAME="1283"></A><H2>1283.  Conjecture </H2><UL>[ <I> Appendices :: Glossary :: Conjecture </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Similar to a Hypothesis.  A conjecture is an educated guess which current knowledge deems untestable.  

 </P>  
<HR><HR><A NAME="1284"></A><H2>1284.  Consequence </H2><UL>[ <I> Appendices :: Glossary :: Consequence </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition Q is a <B>consequence</B> of a set of propositions P if and only if Q follows logically from P.  

 </P>  
<HR><HR><A NAME="1285"></A><H2>1285.  Decidable </H2><UL>[ <I> Appendices :: Glossary :: Decidable </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A logic is <B>decidable</B> iff there is a terminating algorithm which determines for each sequent of the logic whether or not it is valid.  

 </P>  
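<P> Propositional logic is the standard example of a decidable logic: with n atoms there are only 2<SUP>n</SUP> valuations, so checking them all is a terminating algorithm.  A minimal sketch (illustrative Python; sequents are encoded as functions from a valuation to a truth value, which is our own convention): </P>

```python
from itertools import product

def decide(premises, conclusion, atoms):
    """Terminating decision procedure for a propositional sequent:
    valid iff no valuation makes every premise true and the conclusion false."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False        # found a counterexample valuation
    return True                 # exhausted all valuations: valid

# Modus ponens, P -> Q, P |- Q, is valid:
assert decide([lambda v: (not v['P']) or v['Q'], lambda v: v['P']],
              lambda v: v['Q'], ['P', 'Q'])

# Affirming the consequent, P -> Q, Q |- P, is not:
assert not decide([lambda v: (not v['P']) or v['Q'], lambda v: v['Q']],
                  lambda v: v['P'], ['P', 'Q'])
```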
<HR><HR><A NAME="1286"></A><H2>1286.  Deny </H2><UL>[ <I> Appendices :: Glossary :: Deny </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To say that a proposition is false.  

 </P>  
<HR><HR><A NAME="1287"></A><H2>1287.  Discourse </H2><UL>[ <I> Appendices :: Glossary :: Discourse </I> ]</UL>

<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A sequence of sentences which flow through a train of thought like links in a chain.  

 </P>  
<HR><HR><A NAME="1288"></A><H2>1288.  Dissent </H2><UL>[ <I> Appendices :: Glossary :: Dissent </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To disagree with a proposition.  

 </P>  
<HR><HR><A NAME="1289"></A><H2>1289.  Evaluate </H2><UL>[ <I> Appendices :: Glossary :: Evaluate </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To evaluate something is to determine the significance, worth, or condition of it usually by careful appraisal and study.

</P>  <H4>Related Terms</H4><UL>            

   <LI> <A HREF="#1277" TARGET="baseframe">Analyze</A>, <I>Often mistaken for...</I>  

 </LI> </UL> 
<HR><HR><A NAME="1290"></A><H2>1290.  Hypothesis </H2><UL>[ <I> Appendices :: Glossary :: Hypothesis </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Similar to a conjecture.  An educated guess that can be tested.  A hypothesis is usually stated in the form of a proposition.  

 </P>  
<HR><HR><A NAME="1291"></A><H2>1291.  Lemma </H2><UL>[ <I> Appendices :: Glossary :: Lemma </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A <B>lemma</B> is similar to a proposition or theorem in that it is proven.  However, what it proves is of little value by itself; its value lies in its conclusion serving as an intermediate result in a more important proof.  

 </P>  
<HR><HR><A NAME="1292"></A><H2>1292.  Logical Independence </H2><UL>[ <I> Appendices :: Glossary :: Logical Independence </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Given an arbitrary argument form, a set of propositions is <B>Logically Independent</B> iff all valuations on the form represent possible situations.

</P>  <H4>Examples</H4><UL>            

   <LI> <PRE>

Argument Form:  P &#8594; Q, P  &#8870;  Q

P   Q   |   P &#8594; Q   P   Q
F   F   |   T       F   F
F   T   |   T       F   T
T   F   |   F       T   F
T   T   |   T       T   T

If we let:
   P, It is Saturday
   Q, It is raining.

We find that, despite what the argument form tells us, every one of the four valuations describes a possible situation: whether or not it is Saturday has no bearing on whether it is raining.  P and Q are therefore logically independent.
</PRE>  

 </LI> </UL> 
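<P> For truth-functional forms the definition can be checked directly: enumerate the valuations and ask whether each describes a possible situation.  An illustrative Python sketch (the variable names are our own): </P>

```python
from itertools import product

# P: "It is Saturday", Q: "It is raining".  Nothing ties their truth
# values together, so all four valuations are possible situations.
valuations = list(product([False, True], repeat=2))
assert len(valuations) == 4   # (F,F), (F,T), (T,F), (T,T) -- all possible

# Contrast with P and not-P: the valuation making both true is impossible,
# so they are NOT logically independent.
possible = [(p, np) for p, np in valuations if np == (not p)]
assert (True, True) not in possible
```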
<HR><HR><A NAME="1293"></A><H2>1293.  Paradigm </H2><UL>[ <I> Appendices :: Glossary :: Paradigm </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> The generally accepted perspective of a particular discipline at a given time.  

 </P>  
<HR><HR><A NAME="1294"></A><H2>1294.  QED </H2><UL>[ <I> Appendices :: Glossary :: QED </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Latin <I>quod erat demonstrandum</I>, which was to be proved.  Often used at the end of a completed proof.  

 </P>  
<HR><HR><A NAME="1295"></A><H2>1295.  Symbolic Logic </H2><UL>[ <I> Appendices :: Glossary :: Symbolic Logic </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Any logic which is studied using an "algebra-like" notation, such as the formal Frege-Russell logics and the formal non-classical logics.  This does not include all formal logics; Term Logic, for example, is sometimes a mix of symbols and English.  

 </P>  
<HR><HR><A NAME="1296"></A><H2>1296.  Syncategorematic </H2><UL>[ <I> Appendices :: Glossary :: Syncategorematic </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> (Adjective) Forming a meaningful expression only in conjunction with a denotative expression (as a content word).  

 </P>  
<HR><HR><A NAME="1297"></A><H2>1297.  Synthesize </H2><UL>[ <I> Appendices :: Glossary :: Synthesize </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> To synthesize something is to combine separate elements or components into a whole.

</P>  <H4>Related Terms</H4><UL>            

   <LI> <A HREF="#1277" TARGET="baseframe">Analyze</A>, <I>antonym</I>.  

 </LI> </UL> 
<HR><HR><A NAME="1298"></A><H2>1298.  Theorem </H2><UL>[ <I> Appendices :: Glossary :: Theorem </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> A proposition is a theorem in a system S if and only if the proposition is a necessary consequence of S itself.  No external conditions (premises) are needed.

</P>  <H4>Description</H4>                

   <P> The sequent for a theorem &#934; has the form, '&#8870;  &#934;'.  It has no premises, since a sequent with premises defines the conditions under which some conclusion is true.  

 </P>  
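<P> Truth-functionally, checking '&#8870; &#934;' amounts to checking that &#934; is true under every valuation, since there are no premises to restrict the valuations.  An illustrative Python sketch (the encoding of formulas as functions is our own): </P>

```python
from itertools import product

def is_theorem(formula, atoms):
    """True iff the formula holds under every valuation, i.e. '|- formula'."""
    return all(formula(dict(zip(atoms, values)))
               for values in product([False, True], repeat=len(atoms)))

# Excluded middle, |- P or not-P, is a theorem:
assert is_theorem(lambda v: v['P'] or not v['P'], ['P'])

# A bare atom is not: it is false under the valuation P = False.
assert not is_theorem(lambda v: v['P'], ['P'])
```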
<HR><HR><A NAME="1299"></A><H2>1299.  Theory </H2><UL>[ <I> Appendices :: Glossary :: Theory </I> ]</UL>


<H4><IMG SRC="img/meaningIcon.jpg" ALIGN="bottom">Definition</H4>                

   <P> Define the 'logical closure' of a set of propositions as the set of all propositions which logically follow from those propositions (theorems).

   </P> <P> So, a theory is any logically closed set of propositions.

   </P> <P> A theory encompasses all its logical consequences.  A theory is 'consistent' if it contains no proposition together with its negation.  A theory is 'trivial' if it contains every proposition.  So, any inconsistent theory is trivial.  

 </P>  
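<P> The claim that an inconsistent theory is trivial can be checked semantically: a set containing a proposition and its negation is satisfied by no valuation, so it vacuously entails every proposition.  An illustrative Python sketch (the function-based encoding is our own): </P>

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Semantic consequence: every valuation satisfying all the premises
    also satisfies the conclusion (vacuously true if none does)."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# {P, not-P} is inconsistent, so it entails Q and not-Q alike:
inconsistent = [lambda v: v['P'], lambda v: not v['P']]
assert entails(inconsistent, lambda v: v['Q'], ['P', 'Q'])
assert entails(inconsistent, lambda v: not v['Q'], ['P', 'Q'])
```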
