Hi Dimitre,
Sorry for being unclear; I did find the concept a bit hard to explain. Let me try again, more simply and in different wording.

IN SHORT: take any input file, filter out several kinds of data (filter on filter), tokenize it, and serialize the result as XML.

LONGER:

1. Take an input file containing some structured data, for example this single CSV line:

   Field 1, "quoted field", /* comments, ignored */ "field with ""quotes"" in it", unquoted field // end of line comment

2. By applying a chain of filters, this (and many other) structured formats can be turned into a node set. In this example:

   a) replace all comments with nothing
   b) replace doubled quotes ("") with the special character DQUOT
   c) replace commas between quotes with the special character COMMA
   d) remove all quotes
   e) tokenize the string on the remaining commas
   f) on serialization, replace the special characters DQUOT and COMMA with their normal counterparts

3. The order of a-f is very important. It is defined in an xsl:variable, as in the original example.

If you take this to a higher level, you get a very powerful structured-text-to-XML extractor, which is what I am after.

You say that you can define the order of execution; can you shed some more light on how to do so?

Cheers,
Abel

Dimitre Novatchev wrote:
Hi Abel,
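To make the filter chain a-f concrete, here is a minimal sketch in Python rather than the XSLT this thread is actually about; the placeholder characters and the regular expressions are my own assumptions, not Abel's implementation:

```python
import re

# Assumed placeholder characters, chosen because they should not occur in the data.
DQUOT, COMMA = "\x01", "\x02"

line = ('Field 1, "quoted field", /* comments, ignored */ '
        '"field with ""quotes"" in it", unquoted field // end of line comment')

# a) replace all comments with nothing
s = re.sub(r'/\*.*?\*/', '', line)
s = re.sub(r'//.*$', '', s)
# b) replace doubled (escaped) quotes with DQUOT
s = s.replace('""', DQUOT)
# c) replace commas inside quoted fields with COMMA
s = re.sub(r'"[^"]*"', lambda m: m.group(0).replace(',', COMMA), s)
# d) remove all remaining quotes
s = s.replace('"', '')
# e) tokenize the string on the remaining commas
fields = [f.strip() for f in s.split(',')]
# f) on serialization, restore the special characters
fields = [f.replace(DQUOT, '"').replace(COMMA, ',') for f in fields]

# serialize the token set as XML
xml = "<row>" + "".join(f"<field>{f}</field>" for f in fields) + "</row>"
print(xml)
```

Note that the order matters exactly as described: if d) ran before c), the quote delimiters would be gone before embedded commas could be protected, and e) would split quoted fields apart.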




