
RE: First Order Logic and Semantic Web RE: NPR, Godel, Semantic Web

  • From: "Bullard, Claude L (Len)" <clbullar@i...>
  • To: Joel Rees <rees@m...>, Jeff Lowery <jlowery@s...>
  • Date: Wed, 16 May 2001 11:09:20 -0500

> first order logic solutions
Sort of.  We call it XML.  Namespaces make it 
a little goofy, but that is because they still 
can't figure out why the lawyers are laughing 
at them.  The lawyers know what the sw engineers 
want to avoid because, sort of like the X-Files, 
they understand the concept of a higher authority 
better than the geeks at MIT do.  

But...

The aspect of XML so innate that we tend to 
overlook it after a while (it fades into ye 
olde gestalt background) is that we use 
markup to *precisely* annotate text.  Otherwise, 
HTML would do the job.  This goes back to 
the Quality of Service and Quality of Source 
issues.  Someone using a standard vocabulary 
to mark up text has a rather good chance of 
increasing the precision of indexing engines. 
When you look at the whole nine yards of 
statistical junk (proximity, frequency, 
co-occurrence, etc.) that an automated indexing 
engine needs to classify a document, then 
compare that to the single datum of knowing 
its DOCTYPE, you see that things improve a 
lot with regard to assertion checking about 
the content.  For a system that answers questions, 
the same quality properties apply.
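
(A rough sketch of the difference, in Python with the 
standard xml.etree module; the element names and the 
tiny document are made up for illustration, not from 
any real vocabulary:)

    # Hypothetical vocabulary, purely to show the contrast.
    import xml.etree.ElementTree as ET
    from collections import Counter

    doc = """<report>
      <author>A. Nonymous</author>
      <claim kind="fact">The instrument traded at par.</claim>
      <claim kind="opinion">The instrument is overvalued.</claim>
    </report>"""

    root = ET.fromstring(doc)

    # Statistical route: a bag of words over the raw text,
    # with no idea what any of it is.
    text = ET.tostring(root, method="text", encoding="unicode")
    words = Counter(text.split())

    # Markup route: the vocabulary says exactly which spans
    # are claims and which of those are asserted as fact.
    facts = [c.text for c in root.iter("claim")
             if c.get("kind") == "fact"]

    print(words.most_common(3))
    print(facts)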

When it was suggested early in the XML rhubarb 
that DTDs would go away (well-formed only), 
I laughed.  That removes the biggest advantage 
of SGML:  standard vocabularies for focused 
domains, the easy means to annotate a text with inline 
metainformation for interpretation.  Now people 
are defending DTDs against the next new thing, 
and so it goes, but the principle remains:  once 
you get beyond a simple message, well-formedness 
isn't enough.  You need the metadata to get around 
the outrageous and inefficient noise-reduction 
techniques of open text searching.
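
(To make the well-formed vs. valid distinction concrete, 
a sketch assuming Python with the lxml library; the little 
DTD and documents are invented for the example:)

    # Hypothetical illustration: well-formed is not valid.
    from io import StringIO
    from lxml import etree

    dtd = etree.DTD(StringIO("""
    <!ELEMENT memo (to, body)>
    <!ELEMENT to (#PCDATA)>
    <!ELEMENT body (#PCDATA)>
    """))

    # Well-formed, and it also matches the vocabulary.
    good = etree.fromstring(
        "<memo><to>Len</to><body>hi</body></memo>")

    # Also well-formed, but the vocabulary can vouch for nothing.
    junk = etree.fromstring("<stuff><whatever/></stuff>")

    print(dtd.validate(good))   # True
    print(dtd.validate(junk))   # False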

IOW, a well-marked document source is a primary 
key to the use of the source, particularly with 
regard to interpretation.  As in the example 
I pointed out earlier, it is a heckuva lot 
better to know something is marked as a point-of-view 
vs. a fact.  The URIness of it might be used to 
tell you who did that.  You may have a history 
with terms originating from that URI and, over 
time, you may develop trust or distrust of the 
source.  This system can still be 'gamed', but 
it is hard to sustain.  There will be questions 
it can't answer because the facts don't close 
the query.  Rumors depend on anonymity.  
Mission-critical operations aren't committed to 
rumor-filled transactions.  So again, we are 
back to operational solutions: choosing sources 
well, rules to disregard non-closing queries 
("no" and "I don't know" are perfectly good answers), 
etc.  
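
(Another toy illustration, not anything specified by the 
SW work: a Python sketch where the markup carries the 
fact vs. point-of-view distinction plus the source URI, 
and the reader keeps its own trust table built from 
history with those sources:)

    # The vocabulary, URIs, and trust numbers are all invented.
    import xml.etree.ElementTree as ET

    doc = """<statements>
      <statement kind="fact"
                 source="http://example.org/registry">
        The filing was received on 14 May.
      </statement>
      <statement kind="point-of-view"
                 source="http://example.org/rumor-mill">
        The filing will probably be rejected.
      </statement>
    </statements>"""

    # Trust built up from experience with each source URI.
    trust = {
        "http://example.org/registry": 0.9,
        "http://example.org/rumor-mill": 0.2,
    }

    for s in ET.fromstring(doc).iter("statement"):
        score = trust.get(s.get("source"), 0.0)
        if s.get("kind") == "fact" and score > 0.5:
            print("usable:", s.text.strip())
        else:
            print("set aside:", s.text.strip())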

No magic but experience.

There were lots and lots of genCoded languages 
before HTML, some much better done.  It thrived 
on free software, colonization, and the naivete 
of the users.  That is a historical occurrence 
like the Beatles, sweet, cute, right place at 
the right time and unlikely to happen again. 
XML is a distillation of all the work done 
in markup to date.   It also won't be reproduced. 
It still requires skill to apply well.  The 
semantic web designs partake of all the AI 
work done since the fifties and all of the 
work in bibliographic systems since the middle 
ages.  We have the experience.   We don't have 
practice at this scale and for that reason if 
no other, I suggest that local domains based 
on common vocabularies will initially do the 
heavy lifting.   Standard vocabularies, concept 
maps (e.g., topic maps), etc. improve the situation 
immeasurably because the system can know whether 
the term "instrument" in a query is asking about 
financial institutions or music stores.
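
(One last toy sketch, with namespace URIs I've made up: 
the vocabulary a query is phrased against carries the 
sense of the term, so the system doesn't have to guess:)

    # Hypothetical vocabulary URIs; the point is only that the
    # vocabulary resolves the sense of "instrument" up front.
    SENSES = {
        "http://example.org/vocab/finance": "a security or contract",
        "http://example.org/vocab/music": "something you play",
    }

    def interpret(term, vocabulary):
        sense = SENSES.get(vocabulary, "unknown sense")
        return "'%s' read against %s: %s" % (term, vocabulary, sense)

    print(interpret("instrument", "http://example.org/vocab/finance"))
    print(interpret("instrument", "http://example.org/vocab/music"))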

When fly-by-wire guidance systems were first 
introduced (an expert system for airliners), 
they scared the designers witless.  In fact, 
some of them did fly jets into the tarmac 
and there were horrendous accidents (chaos 
comes out of complexity, and real-time systems 
courses don't treat chaos theory lightly).  However, 
every time you get on an Airbus and cross the 
Atlantic, a bot is at the controls with a 
human pilot manager.   So, with experience, 
it can be done.

Just don't fly on the first one.

Len 
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Joel Rees [mailto:rees@s...]

> The web is an amplifier.  Deal with it accordingly.

Brings up another question. Has the SW team produced any concrete means of
dealing with the authority issues?
