XSL-LIST Mailing List Archive

Re: Efficiently transposing tokenized data
Michael Kay wrote:
(2) Do a preprocessing pass to compute a sequence of NxM strings in one big sequence, then operate by indexing into this big sequence.
<xsl:stylesheet version="2.0"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:template match="MultiLine">
    <xsl:variable name="data-chain" as="xs:string*"
        select="for $l in Line return tokenize($l/@data, '\s')"/>
    <xsl:variable name="samples" select="@samples cast as xs:integer"/>
    <table>
      <tr>
        <xsl:for-each select="Line">
          <th><xsl:value-of select="@title"/></th>
        </xsl:for-each>
      </tr>
      <xsl:for-each-group select="$data-chain"
          group-by="position() mod $samples">
        <tr>
          <xsl:for-each select="current-group()">
            <td><xsl:value-of select="."/></td>
          </xsl:for-each>
        </tr>
      </xsl:for-each-group>
    </table>
  </xsl:template>
</xsl:stylesheet>

Assuming a large input, your approach looks more efficient to me, as it avoids grouping where indexing into the list does the job. From previous answers on this list to similar questions, I gather that performance here is implementation-defined. Still, I'm asking whether that is all that can be said, or whether there is a rationale for favoring indexing over grouping when (a) processing time or (b) memory consumption matters.

Michael Ludwig
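For comparison, the indexing variant of approach (2) can be sketched like this: build the same flat sequence, then address each cell positionally instead of grouping. This is a hypothetical rewrite, not code from the original thread; the `$lines`, `$row`, and `$line` variables and the index arithmetic `($line - 1) * $samples + $row` are my own, and they assume the same `MultiLine`/`Line`/`@data` input as above.

```xml
<xsl:stylesheet version="2.0"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:template match="MultiLine">
    <!-- Flat sequence: tokens of Line 1, then Line 2, etc. -->
    <xsl:variable name="data-chain" as="xs:string*"
        select="for $l in Line return tokenize($l/@data, '\s')"/>
    <xsl:variable name="samples" select="@samples cast as xs:integer"/>
    <xsl:variable name="lines" select="count(Line)"/>
    <table>
      <tr>
        <xsl:for-each select="Line">
          <th><xsl:value-of select="@title"/></th>
        </xsl:for-each>
      </tr>
      <!-- One output row per sample position ... -->
      <xsl:for-each select="1 to $samples">
        <xsl:variable name="row" select="."/>
        <tr>
          <!-- ... and one cell per input Line, fetched by index. -->
          <xsl:for-each select="1 to $lines">
            <!-- Bind the line number: inside the predicate below,
                 "." would refer to the $data-chain item, not this integer. -->
            <xsl:variable name="line" select="."/>
            <td>
              <xsl:value-of
                  select="$data-chain[($line - 1) * $samples + $row]"/>
            </td>
          </xsl:for-each>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>

</xsl:stylesheet>
```

The numeric predicate acts as a positional index into `$data-chain`, so no grouping key is computed; whether this beats `xsl:for-each-group` in practice depends on how the processor implements positional access on sequences.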