[XML-DEV Mailing List Archive] Re: Re: Divorcing Data Model and Syntax: was Re: he
"Keith W. Boone" wrote: > I think you miss the point of Jeni and Patrick's work. The Syntax of XML does > not allow for overlapping markup, something Jeni, Patrick, myself and others > have to deal with on a regular basis. LMNL and JITTs both provide mechanisms > to resolve those problems. I may be doubly obtuse, but I do not get what point it is that you think I have missed. My point in the posting to which you have responded is that, as this thread has developed, it has proven to be about processing models rather than about the larger relationship of syntax to semantics. Is that not effectively what you have said just above? > If these problems aren't the ones you need to deal with, it doesn't > invalidate their approach or the results. Forcing their problems into using an > XML 1.0 syntax ignores the requirements of their problem, and for what > purpose? I do, in fact, deal regularly with problems of concurrent markup which needs to range from the most rarefied reaches of critical commentary down to the character level in printed texts and to encompass lacunae, differences of scribal hands and other physical characteristics of the manuscript and epigraphical witnesses. It is true that I have developed an idiosyncratic processing mechanism which respects the well-formedness constraints of XML 1.0. It does this by segregating views of 'the text'--though the first premise is that there is not a single text--into separate XML documents, ranging in size from a single element to the entirety of a critical commentary. The values of attributes on the elements of those documents provide the vehicle for joining those documents into something very like relational database views, so that only one aspect of 'the text' is considered at a time, but that all of the marked-up evidence which bears upon that aspect of that instance is available. I frankly have no idea how this approach squares with current academic opinion on the processing of concurrent markup. 
It is a system which I have developed piecemeal as necessity required. The semantics it elaborates are obviously directly dependent on the details of the processing, which was my point in the earlier posting about semantics being elaborated from syntax by processing. That seems to me no different from your point above: your solutions, Jeni's, and Patrick's succeed by the nature of the processing which they apply, not by the conformity of their inputs to the well-formedness constraints of XML 1.0. It happens that my 'serial form' does in fact respect those well-formedness constraints, but that is beside the point that the semantics come out of the execution of process.

> To fit the work into something that can be used in XML? Why is that
> necessary?

It is convenient, but by no means necessary, to use off-the-shelf XML parsers and other standard tools.

> They've both extended the notion of markup in such a way as to provide for
> a missing capability in XML, the ability to use overlapping markup. Sure,
> you could develop an XML syntax that models their data in such a way as to
> provide for an XML serialization of it, but that ignores the human
> requirements of being able to easily interpret and edit that markup
> [remember, none of this is important without humans... ;-)]

As briefly described above, I have in fact developed such a 'serialization', which does respect the well-formedness constraints of XML 1.0. I have done so specifically in the attempt to ensure that both the XML syntactic input and the semantics elaborated by the execution of process are immediately comprehensible to humans.

Respectfully,

Walter Perry
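For readers following the thread from outside: the overlap problem at issue, and one well-formed serialization of it, can be shown in a few lines. This is a generic illustration of the constraint, not LMNL, JITTs, or the poster's own serial form; the element names are invented, though the empty-milestone device is a familiar TEI-style workaround.

```python
# The overlap problem: a quotation boundary crosses a line boundary.
# XML 1.0's tree syntax cannot express the crossed tags directly.
import xml.etree.ElementTree as ET

overlapping = '<l>sing <q>o muse</l><l>of wrath</q> divine</l>'
try:
    ET.fromstring(overlapping)
    parsed = True
except ET.ParseError:
    parsed = False
print(parsed)  # False: the crossed tags violate well-formedness

# One well-formed workaround: empty 'milestone' elements mark the
# quotation's start and end without nesting it inside either line.
milestones = ('<text>'
              '<l>sing <qStart/>o muse</l>'
              '<l>of wrath<qEnd/> divine</l>'
              '</text>')
root = ET.fromstring(milestones)  # parses without error
```

The milestone form keeps every standard parser happy, at the cost of pushing the quotation's extent out of the element hierarchy and into processing, which is precisely the trade-off this thread is arguing about.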