RE: Are we losing out because of grammars? (Re: Schema ambiguity detecti
Precisely. What that says to me is that we have to know the bounds of the system to determine the requirements for the means of validation, which is why I suggest a broadcast model at the high end.

So we are back to Shannon's H and K values. We have to determine a measure for the granularity of the unit of information (K) and a means to determine the uncertainty (H) for any value of K. Sounds theoretical but, IMO, that is the practical problem in a nutshell.

Len
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h

-----Original Message-----
From: Rick Jelliffe [mailto:ricko@a...]

I think there are two kinds of schemas and therefore schema languages: one tries to express what is true of all data of that type at all times (e.g. for storage and 80/20 requirements), and another tries to express the things that make that particular information, at that particular time and context, different from other data of the same type. One tries to abstract away to invariants, the other tries to find abstractions to express these variations.

The first kind is a map, the second kind is a route. The first kind is good for automatically generating interfaces and for coarse validation; the second kind is what is required for data-entry and for debugging actual data.

(As for the status quo, I don't believe XML Schemas and DTDs pay much or any attention to this second kind of schema: maybe TREX and RELAX do a little bit, and I hope Schematron is closer to the other end of the spectrum.)
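[Editor's note: Len's H and K refer to Shannon's entropy, a measure of uncertainty over units of information at some chosen granularity. The following minimal Python sketch is not part of the original thread; it only illustrates how H (in bits per unit) changes as K is varied for a toy XML fragment. The function name, the fixed-size chunking scheme, and the sample string are all assumptions made for illustration.]

    import math
    from collections import Counter

    def shannon_entropy(message: str, k: int) -> float:
        """Estimate Shannon entropy H (bits per unit) of `message`
        when it is split into units of granularity `k` characters."""
        # Chop the message into non-overlapping k-character units.
        units = [message[i:i + k] for i in range(0, len(message) - k + 1, k)]
        counts = Counter(units)
        total = len(units)
        # H = -sum(p * log2(p)) over the observed unit frequencies.
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    if __name__ == "__main__":
        sample = "<po><item/><item/><item/></po>" * 10
        for k in (1, 2, 4):
            print(f"K={k}: H = {shannon_entropy(sample, k):.3f} bits/unit")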
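[Editor's note: Rick's map/route distinction can be made concrete with a rough sketch, again not from the thread and not in XSD or Schematron syntax. In the Python below, validate_type stands in for the first kind of schema (what is true of every purchase order at all times, like a content model), while validate_rules stands in for the second kind (assertions that depend on this particular document's values, in the spirit of Schematron). All names and the sample record are invented for illustration.]

    def validate_type(po: dict) -> list[str]:
        """First kind of schema: structural facts true of all purchase
        orders (field presence and types), useful for coarse validation."""
        errors = []
        for field, typ in (("id", str), ("items", list), ("status", str)):
            if not isinstance(po.get(field), typ):
                errors.append(f"{field} missing or not a {typ.__name__}")
        return errors

    def validate_rules(po: dict) -> list[str]:
        """Second kind of schema: contextual assertions about this
        particular document, useful for data entry and debugging."""
        errors = []
        if po.get("status") == "shipped" and not po.get("ship_date"):
            errors.append("a shipped order must carry a ship_date")
        if po.get("items") == []:
            errors.append("an order should contain at least one item")
        return errors

    if __name__ == "__main__":
        po = {"id": "PO-1", "items": [], "status": "shipped"}
        print(validate_type(po))   # [] -- structurally fine (the map)
        print(validate_rules(po))  # two rule violations (the route)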