[om-list] OM version 0.02

Jeremy Almond jeremy at thoughtform.com
Fri Feb 27 15:22:34 EST 2004


How does "Artificial Intelligence: A Modern Approach" by Russell and Norvig
compare to the AI book you used up at Weber State?


Jeremy Almond
Data Analyst
Thoughtform Corp.
Tel: 801-299-1285
Toll Free: 800-854-5272
Fax: 801-299-1294
jeremy at thoughtform.com
jeremy at neuroinsight.com
jeremy at seismicinsight.com


----- Original Message ----- 
From: "Thomas L. Packer at home" <ThomasAndMegan at Middle.Net>
To: "OM List" <om-list at onemodel.org>
Cc: "Thomas and Megan (h) Packer" <ThomasAndMegan at Middle.Net>
Sent: Wednesday, February 25, 2004 8:59 PM
Subject: Re: [om-list] OM version 0.02


> Hello OM People
>
>     I highly recommend "Artificial Intelligence: A Modern Approach" by
> Russell and Norvig.  It is supposedly the best introductory AI text around,
> and I believe it.  It adds unity to the field of AI by expressing all the
> disparate ideas people think of as part of AI in terms of agents.
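>
>     As a rough sketch of that agent framing (my own illustration in Python,
> not code from the book), an agent is basically a percept-in, action-out
> loop:
>
> # Minimal agent loop: the agent maps its percept history to an action.
> class Agent:
>     def __init__(self, program):
>         self.program = program      # the "agent program": percepts -> action
>         self.percepts = []
>
>     def step(self, percept):
>         self.percepts.append(percept)
>         return self.program(self.percepts)
>
> # A trivial reflex agent: ignores history, reacts only to the latest percept.
> reflex = Agent(lambda ps: "flee" if ps[-1] == "danger" else "explore")
> print(reflex.step("quiet"))    # explore
> print(reflex.step("danger"))   # flee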
>
>     As one example, I was amazed when I came to the part about decision
> theory as applied to AI and designing intelligent agents.  This is my very
> own pet theory, something I didn't think anyone else thought about, only to
> find that it is already well established in the field of AI, though
> expressed in slightly different words than I use.  In my words: actions
> should be based on the correct combination of epistemology, ontology, and
> axiology.  In their words, the Principle of Maximum Expected Utility: "An
> agent is rational if and only if it chooses the action that yields the
> highest expected utility, averaged over all possible outcomes of the
> action."  Considering all the little parameters that can still be tweaked
> inside this principle, I consider this to be a unifying principle of AI, in
> addition to the idea of intelligent agents, and something that can apply to
> any "intelligent" program written, even OM.  In fact, it should probably
> apply to people's decisions as well.
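>
>     A toy version of that principle (my own Python sketch with made-up
> numbers, not anything from the book):
>
> # Principle of Maximum Expected Utility over toy data: for each action,
> # the expected utility is the probability-weighted average of the
> # utilities of its possible outcomes; pick the action that maximizes it.
> outcomes = {
>     "take umbrella":  [(0.7, 8), (0.3, 7)],    # (P(outcome), utility)
>     "leave umbrella": [(0.7, 2), (0.3, 10)],
> }
>
> def expected_utility(action):
>     return sum(p * u for p, u in outcomes[action])
>
> best = max(outcomes, key=expected_utility)
> print(best, expected_utility(best))   # take umbrella, about 7.7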
>
>     I wish I had gone earlier to a school that actually had a good
> computer science program.  Even though BYU's CS department doesn't do a
> whole lot with knowledge modelling and such things, it is still a good
> enough school that I have learned a lot of useful things about what I want
> to do for a career and/or research.  The issues seem a lot clearer when
> you study from better sources.
>
>     For example (an example that is even more relevant to OM): I never
> quite understood the issues involved in trying to model arbitrary
> knowledge in a useful way.  Here is just one issue I have learned recently:
>
>     If you want to invent a modelling language that is more expressive
> than first-order logic (predicate calculus), which is the standard from
> which few people are diverging very far, then you will also need to make
> it usable by inventing a *complete* and *sound* inference procedure,
> something analogous to what people use in such systems as Prolog and
> theorem provers, i.e. something analogous to Modus Ponens or Resolution.
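>
>     To make the Modus Ponens part concrete, here is a tiny forward-chaining
> sketch of my own (propositional Horn clauses only, nothing like a real
> theorem prover):
>
> # Repeatedly apply Modus Ponens: whenever every premise of a rule is
> # already known, add its conclusion, until nothing new can be inferred.
> rules = [({"rain"}, "wet_ground"), ({"wet_ground", "cold"}, "icy")]
> facts = {"rain", "cold"}
>
> changed = True
> while changed:
>     changed = False
>     for premises, conclusion in rules:
>         if premises <= facts and conclusion not in facts:
>             facts.add(conclusion)
>             changed = True
>
> print(facts)   # now includes "wet_ground" and "icy"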
>
>     Complete means that any idea that is true, can be expressed in your
> language, and follows from what is in your knowledge base can actually be
> inferred from that knowledge base.  --  And then you have to deal with
> Gödel's Incompleteness Theorem, which says that this is actually impossible
> if you have a sufficiently expressive language (one that includes the
> principle of "mathematical induction").
>
>     If you want to try to beat this game by shifting from analytical
> inference to synthetic inference (from deduction to induction) and want to
> learn new information, as in using machine learning, then you have to come
> to terms with the "No Free Lunch" Theorems, which say that it is impossible
> to write a program that is sufficiently general-purpose that it can learn
> arbitrary information.  That is because the only way to induce
> generalities from a finite amount of data is to make assumptions (machine
> learning calls these "learning biases", and every learner has at least
> one).  These assumptions can be true and useful for some areas of
> information, but they will necessarily be false for others.
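>
>     Here is the learning-bias point in miniature (my own toy example, not
> from any text): two learners that fit the same data perfectly but
> extrapolate differently, because each smuggles in a different assumption:
>
> # Training data: f(0) = 0, f(1) = 2.  Which f is it?
> data = [(0, 0), (1, 2)]
>
> linear = lambda x: 2 * x                      # bias: "the truth is linear"
> memorize = lambda x: dict(data).get(x, 0)     # bias: "default to 0 off the data"
>
> print(all(linear(x) == y for x, y in data))   # True -- fits the data
> print(all(memorize(x) == y for x, y in data)) # True -- also fits the data
> print(linear(5), memorize(5))                 # 10 0 -- but they disagree elsewhere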
>
>     Sorry to sound so pessimistic, but I have started to view my original
> goal, and the goal of OM, as possibly being too idealistic -- and perhaps
> impossible.  But I will continue to think about it.  And we should
> certainly keep in mind the goal of usefulness: what do we want to do, and
> what will we be able to do, with the knowledge once it is all in the box?
>
>     My currently planned approach to the problem of modelling knowledge is
> to tackle the input problem of natural language (a good source of
> knowledge): make a program able to learn language and express the
> information it reads as a knowledge base written in a language that can
> represent arbitrary information.
>
>     But now that I think about it, I have already decided that there is an
> equivalence between inference and definition.  That is to say, there is a
> correspondence between the two ways of inferring new information from old
> (through induction and deduction) and the two ways of representing new
> knowledge in terms of old (through intensional and extensional
> definitions).  This strongly suggests to me that it may be impossible to
> represent arbitrary information, which definitely says something bad for
> OM if this is right.  In fact, it may say that my goal of developing an
> interlingua is doomed to failure.  I am not sure.
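>
>     (For anyone not used to those terms: an extensional definition lists
> the members, while an intensional definition gives a rule for membership.
> A throwaway sketch:)
>
> # Extensional definition of "small even number": enumerate the members.
> small_evens_ext = {0, 2, 4, 6, 8}
>
> # Intensional definition: a predicate that decides membership by a rule.
> def small_evens_int(n):
>     return 0 <= n < 10 and n % 2 == 0
>
> # They agree on this finite range, but only the intensional form says
> # anything beyond what was explicitly listed.
> print(all(small_evens_int(n) == (n in small_evens_ext) for n in range(10)))  # True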
>
>     I am totally re-thinking my research plans right now, trying to find a
> more practical research goal -- one that is more likely to be possible,
> but one that is still interesting and useful.
>
>     We will see.
> tomp
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Omnia apud me Mathesis fiunt.  ("With me, everything turns into mathematics.")
> www.Ontolog.Com
>
> It is a paradoxical but profoundly true
> and important principle of life that the
> most likely way to reach a goal is to be
> aiming not at that goal itself but at
> some more ambitious goal beyond it.
>   -- Arnold Toynbee
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>
> ----- Original Message ----- 
> From: "Luke Call" <lacall at onemodel.org>
> To: <om-list at onemodel.org>
> Sent: 2004.Feb.25/Wed 06:46
> Subject: [om-list] OM version 0.02
>
>
> I have released OM version 0.02 to the web site:
> http://www.onemodel.org/devel/index.html
>
> It still doesn't do much, but it adds minimal support for relationships and
> some code cleanup. The relationships stuff is in the data model but has
> only partial UI support so far, and is not really well tested. I thought
> I'd post something before I add minimal "action" support (actor, action,
> acted upon), so one could enter the equivalent of "we ate dinner" in the
> system. Someday the action could represent a script which updates info
> in the model (or even mini-simulations), but probably not right away.
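>
> Roughly, I picture such a record looking something like this (just a
> sketch of the actor/action/acted-upon idea, not actual OM code):
>
> # "we ate dinner" as an actor / action / acted-upon triple.
> class ActionRelation:
>     def __init__(self, actor, action, acted_upon):
>         self.actor = actor            # who acted
>         self.action = action          # what was done
>         self.acted_upon = acted_upon  # what it was done to
>
> dinner = ActionRelation("we", "ate", "dinner")
> print(dinner.actor, dinner.action, dinner.acted_upon)   # we ate dinner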
>
> I still need to go back & review some of Mark's earlier emails.
>
> It's glacial, but progressing.
>
> I hope all of you and yours are very well.
>
> If anyone ever looks at the code & has feedback that would be great, but
> no worries. :)
>
> Best,
> Luke
>
> _______________________________________________
> om-list mailing list
> om-list at onemodel.org
> http://six.pairlist.net/mailman/listinfo/om-list
>



