And now I seem to recall that MS had a condition of use that people could not publish benchmarks of MSXML, which makes that part of my comment unfair. I think some people published a benchmark with the name of that product left out. But my memory of this is not strong...

As for Xerces/Java performance, this paper https://www.researchgate.net/publication/220787871_The_XMLBench_Project_Comparison_of_Fast_Multi-platform_XML_libraries suggests that, at that time, only the Oracle libraries were worse among the major tools. (On the other hand, there are benchmarks with different results; it often depends on the kind of markup your benchmark uses, and for XSLT the parsing is usually not the major contributor to performance anyway.)

So let me rephrase: to make a legitimate comparison of a new parsing method, it is not enough to pick a couple of parsers you consider typical, median, or popular. It is better to show the best and worst two, plus whatever parser is most often found in other papers: if you have a four-fold improvement over a mediocre but popular parser, and it turns out several other parsers offer a ten-fold improvement, that changes what the reader may conclude about your method.

Cheers
Rick

On Fri, 23 Jul. 2021, 14:21 Mukul Gandhi, <mukulg@s...> wrote:
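To make the comparison point above concrete, here is a minimal timing sketch in Java. It is not a rigorous benchmark: the synthetic document, warm-up counts, and iteration counts are all placeholder assumptions, and it only times the JDK's default JAXP DOM parser. A real study would run the same harness over several parsers (Xerces, the fastest and slowest known, and whichever appears most often in prior papers) on representative markup.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class ParseBench {
    // Small synthetic document; results depend heavily on the shape of
    // the markup, so real benchmarks should use representative inputs.
    static final byte[] DOC = ("<root>"
            + "<item a=\"v\">text</item>".repeat(1000)
            + "</root>").getBytes(StandardCharsets.UTF_8);

    // Times the default JAXP DOM parser. Other parsers (e.g. Xerces)
    // can be substituted via the standard JAXP factory lookup mechanism.
    static long microsPerParse(int runs) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Warm-up so JIT compilation does not dominate the measurement.
        for (int i = 0; i < 20; i++) {
            dbf.newDocumentBuilder().parse(new ByteArrayInputStream(DOC));
        }
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            dbf.newDocumentBuilder().parse(new ByteArrayInputStream(DOC));
        }
        return (System.nanoTime() - start) / runs / 1000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("default JAXP DOM: ~" + microsPerParse(100) + " us/doc");
    }
}
```

Reporting the fastest and slowest parsers alongside the popular one is what lets a reader judge whether a claimed speed-up over one baseline actually beats the state of the art.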
