
  • To: 'Jeff Lowery' <jlowery@s...>, xml-dev@l...
  • Subject: RE: Poll (was: Seeking advice on handling large industry-standard XML data models)
  • From: "Bullard, Claude L (Len)" <clbullar@i...>
  • Date: Tue, 14 Jan 2003 16:09:30 -0600

The fast and usually correct answer is:  it depends. 
And that ain't no help.  Off the top:

DOM - usually the worst solution, IMO.  It leads 
to an excess of exposed scripting, lots of memory 
overhead, etc.  I'm not sure when DOM is the 
right solution.  I'd like to hear opinions on that.
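For concreteness, the "exposed scripting" complaint looks something like this sketch using Python's stdlib minidom (the document and element names are invented for illustration): every step of the traversal is spelled out in calling code, and the whole tree sits in memory.

```python
# A minimal sketch of the DOM style criticized above: the whole
# document is parsed into an in-memory tree and navigated with
# verbose, exposed scripting.  Names here are made up.
from xml.dom.minidom import parseString

doc = parseString("<order><item sku='A1'>widget</item>"
                  "<item sku='B2'>gadget</item></order>")

# Each traversal step is explicit in application code.
skus = [item.getAttribute("sku")
        for item in doc.getElementsByTagName("item")]
print(skus)  # ['A1', 'B2']
```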

Data Binding - expensive.  Fast.  Proprietary 
in many cases (could lead to lots of one-off 
code) but well understood and keeps as much code 
hidden as possible.
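A hand-rolled sketch of the data-binding idea (not any particular proprietary toolkit; the `Item` class and helper are invented): the markup is mapped once into typed objects at the boundary, and the rest of the program never sees XML.

```python
# Hedged sketch of data binding: parse once at the edge, then work
# with plain typed objects.  Item and items_from_xml are invented.
from dataclasses import dataclass
from xml.etree.ElementTree import fromstring

@dataclass
class Item:
    sku: str
    name: str

def items_from_xml(text):
    root = fromstring(text)
    return [Item(e.get("sku"), e.text) for e in root.findall("item")]

items = items_from_xml("<order><item sku='A1'>widget</item></order>")
print(items[0].sku)  # A1
```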

XSLT - the best of the worst.  One assumes a 
processing framework exists, is adequate, is 
tested, is fielded ubiquitously.  We had transforms 
before XSLT, but we didn't have HTML.  HTML is 
still the binding force of the web universe.
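The "framework must already exist" caveat is the point: a stylesheet like the minimal sketch below (element names invented) does nothing on its own; it only runs where a conforming XSLT 1.0 processor is already fielded.

```xml
<!-- Minimal XSLT 1.0 sketch: turn invented item elements into an
     HTML list.  Requires a fielded, conforming processor to run. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/order">
    <ul>
      <xsl:apply-templates select="item"/>
    </ul>
  </xsl:template>
  <xsl:template match="item">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>
```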

I'm not sure what pipelining adds except 
more stuff.  When does one need it?  That 
is another "opinions welcome" topic.
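For what pipelining usually means in practice, here is a hedged sketch (stage names invented): each stage is a small tree-to-tree function, and the pipeline is just composition, which is why it reads as an enabler rather than a solution by itself.

```python
# Sketch of a transform pipeline: small tree-to-tree stages composed
# in sequence.  Stage names and the sample document are invented.
from xml.etree.ElementTree import fromstring, tostring

def lowercase_tags(root):
    for el in root.iter():
        el.tag = el.tag.lower()
    return root

def drop_empty(root):
    for el in list(root):
        if el.text is None and len(el) == 0:
            root.remove(el)
    return root

def pipeline(root, *stages):
    for stage in stages:
        root = stage(root)
    return root

doc = fromstring("<ORDER><ITEM>widget</ITEM><ITEM/></ORDER>")
out = pipeline(doc, lowercase_tags, drop_empty)
print(tostring(out).decode())  # <order><item>widget</item></order>
```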

len


From: Jeff Lowery [mailto:jlowery@s...]

> See this insightful article from Sean McGrath:
> 
> http://www.propylon.com/news/ctoarticles/Zen_and_the_Art_of_Motorcycle_Manuals_20020822.html

Yeah, I read this.  In this case, though, the markup decisions have already
been made.  The question is how best to process the markup.  And there is no
one answer, but maybe there's a trend.

I can see Sean's point about XSLT not being the One True Way.  I tried
number 3 already (but in reverse): if your docs are in any way similar,
then you have one helluva XSLT script to write.  On the other hand, what's
better?  DOM?  Data binding?

I see pipelining as an enabling technology rather than a solution in itself,
but maybe I have blinkers on.

-----------
> 3. A transform (from proprietary serialized format to XML interchange
> format)
