RE: Statistical vs "semantic web" approaches to making sense of
On Thu, 24 Apr 2003, Danny Ayers wrote:

> By coincidence I've been writing up a semi-refutation of Cory's
> 'metacrap' piece, hopefully ready in a day or so.

i'd be interested to see that. my initial reaction to this piece was 'crap'! can't help it, but i think it should be obvious that all his arguments apply to data just as well as they do to metadata. there seems to be an underlying view that anything done by a machine - set-top boxes for TV stats or google for metadata - is almost by definition better and more reliable than anything produced by a human.

"Google can derive statistics about the number of Web-authors who believe that that page is important enough to link to, and hence make extremely reliable guesses about how reputable the information on that page is."

really? my friend freddy's got a website with links to the most unreliable sites on the web. how does that affect google's 'reputability' scoring? maybe the number of links to a page is a measure of exactly that and nothing else - but do feel free to make any assumptions you want about why those links are there. personally i don't tend to see google's search results as a reputability grading at all, and i wouldn't recommend that anyone does ("it's true, i found it on google!").

ultimately, if you care about the information that you publish, then you care about the metainformation. and yes, it's generally much easier to find web pages that have meaningful titles.

regards,
/m

Martin Klang
http://www.o-xml.org - the object-oriented XML programming language
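The point about link counts above can be made concrete. Below is a minimal sketch of a naive in-degree "reputability" scorer - this is not Google's actual algorithm (PageRank weights each link by the rank of the linking page, among other refinements), and the site names in the link graph are hypothetical. It simply demonstrates the argument: raw link counting measures the number of inbound links and nothing else.

```python
from collections import defaultdict

# Hypothetical link graph: (source page, target page) pairs.
links = [
    ("freddys-links.example", "unreliable-site-1.example"),
    ("freddys-links.example", "unreliable-site-2.example"),
    ("respected-journal.example", "unreliable-site-1.example"),
]

def naive_reputability(links):
    """Score each page by the number of pages linking to it."""
    scores = defaultdict(int)
    for source, target in links:
        scores[target] += 1
    return dict(scores)

print(naive_reputability(links))
# {'unreliable-site-1.example': 2, 'unreliable-site-2.example': 1}
# Freddy's link and the journal's link count identically: the score
# reflects link count alone, not the intent or reliability behind
# the links.
```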