[om-list] Mark and Tom on logic

Luke Call lacall at onemodel.org
Wed Mar 14 09:01:28 EST 2001


Interesting stuff from Mark and Tom, included with permission.

Message-ID: <001f01c0a5b8$b77d94c0$6402a8c0 at thoughtform.com>
From: Tom and other Packers <TomP at Burgoyne.Com>
To: Mark Butler <butlerm at middle.net>, "Jared (h) Norman" <dtfbti at yahoo.com>
Cc: "Roger Bishop (h) Jones" <rbjones at rbjones.com>, Luke Call <lcall at pobox.com>, Rex Butler <rexbutler at hotmail.com>, Lee Howard <redder at deanox.com>, "Jeremy (w) Almond" <Jeremy at Thoughtform.Com>, "Doug (w) Adams" <Dougy at MVSC.Com>, "Chris (h) Angell" <christoph at middle.net>, "Ron (ws) Peterson" <RPeterson at Weber.Edu>
Subject: Re: Nature of the Laws of Nature
Date: Mon, 5 Mar 2001 14:09:01 -0700
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2653.19)
Content-Type: text/plain; charset="iso-8859-1"

Mark

     I had a minor breakthrough last night as I was trying to go to sleep.
I'm writing about it now in "Higher-Level Mathesis", which is supposed to be
based on "Foundational Mathesis" (a.k.a. "The MetaMathesis Essay"), so I'll
have to go back to that before I can really say that the current essay is
well done.  But I will probably ask you to read the current essay because it
explains what I was trying to explain aloud when you came over.  It's all
so complicated, and I'm so poor at speaking, that I feel I must resort to
writing and let you critique that.

     Dr. Peterson, this will turn into the theoretical foundation for my
senior project, by the way, so ... you may be interested in this
discussion -- assuming you have any time to waste.  Also, you will find an
interesting discussion of the pragmatics of natural language processing at
www.cyc.com.

     In a nutshell, there are four types of inference: two inductive and
two deductive.

     "True, i.e. Complete Induction" moves from many specific premises of
some level of certainty, and arrives at few (or one) general conclusions of
the *same* level of certainty -- always.  I.e., the same amount of
information, nothing injected.

     "Assumptive, i.e. Incomplete Induction" moves from many (but relatively
fewer) premises of some level of certainty, and arrives at few (or one)
general conclusions of a lower level of certainty.  It relies on assumptions
based on apparent patterns in the data.  Most of the inductive inferences,
as called such, are of this kind; that's why induction has been given such
as bad name.  But this basic process need not be so uncertain.

     "True, i.e. Complete Deduction" moves from few general premises of some
level of certainty, and arrives at many specific conclusions (one at a time,
after repeated application of the same algorithm) of the same level of
certainty.  Most of the deductive inferences, as called such, are of this
kind; that's why deduction has been given such a good name.  But this basic
process need not be so certain.

     "Assumptive, i.e. Incomplete Deduction" moves from few (relatively
fewer) general premises of some level of certainty, and arrives at specific
conclusions of a lower level of certainty.  It relies on assumptions based
on the fortune that, though the combination of premises does not constrain
possibility space down to an exact conclusion, it does constrain it enough
that each potential conclusion remaining has some fairly high-level of
certainty.  I'll give you examples of this later, but they are found all
over the place, in the "common sense reasoning" of every average Joe on the
planet.
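
     A minimal sketch of the two inductive forms, in Python; the toy
domain, the predicate, and the function names are assumptions made only
for illustration:

    import random

    # Toy setting (assumed for illustration): a finite, fully enumerable
    # domain and a property we want a general conclusion about.
    domain = range(100)

    def holds(n):
        return n * n >= n

    def complete_induction(domain, prop):
        # Examine every case: the general conclusion inherits exactly the
        # certainty of the premises -- nothing is injected.
        return all(prop(x) for x in domain)

    def incomplete_induction(domain, prop, sample_size=10):
        # Examine a sample and assume the apparent pattern extends: the
        # conclusion has the same form but a lower level of certainty.
        sample = random.sample(list(domain), sample_size)
        return all(prop(x) for x in sample)

    print(complete_induction(domain, holds))    # certain, for this domain
    print(incomplete_induction(domain, holds))  # merely probable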

     I go into more detail in "Higher-Level Mathesis", and will also go into
a lot more detail in "Foundational Mathesis".

     From what I've heard of Claude Shannon and information theory, I love
it.  I will look further into it some day.

     I believe that in both true deduction and true induction, no
information is gained or lost.  It looks like you believe the same thing.

     I still call true induction, "induction", because of its form.  If we
generalise it, it can be algorithmically applied using what appears to be
deduction.  This is what happened, in history, when counting turned into
addition.  It went from a process of induction to a deductive algorithm as
it went from what I call "infra-relational" or "arelational" to what I call
"relational", in form.  A similar shift occurred when arithmetic was
sublimated into algebra, by generalisation.  I plan to do a similar thing
with all of induction, by making it algorithmic through generalisation.
This could be the essence of an optimal machine learning (AI) program.
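
     A small sketch of that counting-to-addition shift, in Python; the
function names are mine, made up only for illustration:

    # Counting (the earlier, pre-algorithmic form): reach m + n by
    # applying the successor operation n separate times.
    def count_up(m, n):
        total = m
        for _ in range(n):
            total += 1      # one successor step per unit counted
        return total

    # Addition (the generalised, "relational" form): the pattern of n
    # successor steps collapsed into a single algorithmic rule.
    def add(m, n):
        return m + n

    assert count_up(7, 5) == add(7, 5) == 12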

     I don't think we can say that any logic, including inductive logic, can
inject information -- perhaps by definition.  When you infer a conclusion
from the premises, you are not adding information.  Inductive logic should
be this way, just as much as deductive logic.  Therefore, I keep the term
"induction inference" to represent those special cases of induction when no
information is added and no certainty is lost -- the form of the conclusion
is just simplified.  And, as I tried to describe above, both induction and
*deduction* have an assumptive form, in which information is injected, or
assumed, and certainty is lost.

     As far as seemingly unaided human knowledge goes, I think what we have
(in science) is best described by the regularist perspective, although I do
believe that not all of the "assumptions" we make when we use "assumptive
inference" are false.  I believe that the Light of Christ is found in these
processes, including intuition, etc.  It's crazy to believe that so many
sudden breakthroughs in thinking and creativity over the centuries were
accidental.

     I'm glad we see so nearly eye-to-eye on this subject.

ciao,
tomp

----- Original Message -----
From: "Mark Butler" <butlerm at middle.net> To: "Thomas L. Packer" 
<TomP at burgoyne.com> Cc: "Luke Call" <lcall at pobox.com>; "Rex Butler" 
<rexbutler at hotmail.com>;
"Lee Howard" <redder at deanox.com> Sent: Monday, March 05, 2001 1:26 AM
Subject: Nature of the Laws of Nature


Hello Tom,

   We talked briefly yesterday about the nature of the kind of truths
that are discoverable through the process of induction.  I think the
following article summarizes some relevant issues very well:

  http://www.utm.edu/research/iep/l/lawofnat.htm

Necessitarianism:  Fundamental non-contingent physical laws constrain
reality.

Regularism: All physical laws are correct descriptions of reality, but
do not constrain it.  In a sense, they are "accidentally true."

From a doctrinal standpoint, I think both the Necessitarian and the
Regularist philosophies can be upheld in the proper context.  Because
of D&C 88:13, I lean towards a modified form of the latter, with God
enforcing natural laws in such a way that makes the Universe hard to
distinguish from a purely Necessitarian one.

However, without God, the Regularist model strains credibility.
According to most Regularists, there is no mechanism for laws of nature
to be enforced; rather, they just happen to be correct descriptions
of reality, literally accidental, or trivially true.  The question of
the origin of these universal laws is ignored.

Now, regarding models of inference, I think you need to classify
very carefully the types of statements that you are dealing with.  Are
the conclusions leading towards inescapable, nomologically true laws of
nature or are they just accidentally true, trivial descriptions of it?

Standard induction is completely dependent on probability, and the
probability of the universal truth of inductive conclusions varies
dramatically with the existence or enforcement of universal physical
laws.

I think you ought to look into information theory, particularly the work
of Claude Shannon.  It seems to me that information theory can tell you
a critical property of any form of inference: whether the conclusion
contains any new information or not.

Shannon developed this theory for use in communications technology, but
it has a wide range of other applications, for example in the analysis
of lossless compression.  Information theory says that, in general, the
size of the smallest bit stream that you can reversibly compress another
bit stream down to corresponds to the information contained within the
original bit stream.
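
A quick sketch of that idea in Python, using zlib purely as a convenient
lossless compressor (the byte streams below are assumptions made for
illustration): compressed size roughly tracks information content.

    import os
    import zlib

    patterned = b"ab" * 5000        # highly regular: little information
    random_ish = os.urandom(10000)  # unpredictable: near-maximal information

    for name, data in [("patterned", patterned), ("random", random_ish)]:
        compressed = zlib.compress(data, 9)
        print(name, len(data), "->", len(compressed))
    # The patterned stream shrinks to a tiny fraction of its size; the
    # random one barely shrinks at all, because the smallest reversible
    # encoding approximates the information content of the original.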

Another key idea is that the essence of communication is to convey bits
that the receiver does not know and has no way of knowing.  Any
information transmitted that the receiver is already aware of must be
discounted when calculating the efficiency of a communication system.

Now, from an information-theoretic perspective, you have to ask
yourself: how much information does the conclusion of a process of
inference transmit to a receiver who knows all of the premises?  I
submit that in the case of deduction, the answer is zero.  In
deduction, all we do is rewrite premises according to logical rules and
discard whatever is irrelevant.

Then there is the form of induction used to infer accidental truths,
e.g. "I like ice cream and you like ice cream => we both like ice
cream."  In that case, we have not injected any new information either.
Rather, we have rewritten the premises in a way that makes the resulting
statement shorter, using rules indistinguishable from those used in
deductive inference.  Essentially, we have compressed the statements
into a shorter one that contains the same amount of information.

Now on the other hand, if I say "A is mortal and B is mortal and C is
mortal => all men are mortal," then my conclusion is not reachable
through the rules of deductive inference, but rather requires a leap of
faith of some kind.  The new information represented in the conclusion
is evidence of that.  We no longer have an equivalent or abbreviated
message, but rather have posited new information nowhere to be found in
the original.
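
A toy Python rendering of the contrast between those two cases; the tiny
"world" and the predicate name are assumptions made for illustration:

    # Premises: facts actually observed about named individuals.
    premises = {("Tom", "likes_ice_cream"), ("Mark", "likes_ice_cream")}

    # Trivial induction: "we both like ice cream" is the same facts in a
    # shorter form; it expands back to the premises without loss.
    group, predicate = {"Tom", "Mark"}, "likes_ice_cream"
    expanded = {(person, predicate) for person in group}
    assert expanded == premises        # information conserved

    # Non-trivial induction: a universal generalisation also covers
    # individuals never observed; that extra coverage is injected.
    world = {"Tom", "Mark", "Luke", "Rex"}
    universal = {(person, predicate) for person in world}
    assert universal - premises        # new claims: information generated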

That is why I say that defining deduction as moving from generals to
specifics and induction as the opposite is not a very useful
distinction.  Information conserving or information generating forms of
inference would be a much better categorization.

Deduction is always information conserving - the conclusion contains no
new information not in the premises. Trivial induction is information
conserving as well.  Non-trivial induction is always information
generating - which makes it simultaneously uncertain and powerful.

The second law of thermodynamics, from a statistical point of view, says
that at best you can only maintain the current level of information you
know about a system.  In general, entropy (what you do not know)
increases over time as your statistical ignorance of minor details
spreads throughout the system.
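
A short sketch of that statistical reading in Python: the Shannon entropy
(in bits) of a toy four-state distribution rises as a blur step -- a
stand-in, by assumption, for losing track of minor details -- is applied
repeatedly.

    import math

    def entropy_bits(p):
        # Shannon entropy of a discrete distribution, in bits.
        return sum(-x * math.log2(x) for x in p if x > 0)

    def diffuse(p):
        # Mix each state with its neighbours: a toy model of statistical
        # ignorance spreading through the system.
        n = len(p)
        return [0.5 * p[i] + 0.25 * p[i - 1] + 0.25 * p[(i + 1) % n]
                for i in range(n)]

    p = [1.0, 0.0, 0.0, 0.0]    # we begin knowing the exact state: 0 bits
    for step in range(5):
        print(step, round(entropy_bits(p), 3))
        p = diffuse(p)
    # Entropy climbs toward 2 bits (uniform over four states): without
    # new measurements, what we know about the system can only decay.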

Non-trivial induction is like a rational attempt to violate the second
law of thermodynamics, essentially to use incomplete patterns to project
order onto a system where there was none before.  If the patterns are an
illusion or a coincidence, this effort will surely be in vain - but if
the patterns are shadows of a deeper order, then you end up having more
information about the details of the system than you had when you
started.

The epistemological issue about induction that has troubled philosophers
for centuries is: how do you tell the difference?  You can perform
controlled experiments, but no matter how hard you try, induction alone
can strictly give you only statistical characterizations of past events
- i.e. the regularist view.

The jump from correct descriptions of the past to necessary or enforced
constraints about the future, on the other hand, is a matter of faith.
The additional information injected to make induction a useful process
is pure inspiration -- usually confirmable by experiment, or the world
of science would never get anywhere.

- Mark

--
Mark Butler        ( butlerm at middle.net )
Software Engineer
Epic Systems
(801)-451-4583




-- 
Help us put all knowledge in one bucket: www.onemodel.org.




