
speed questions


Hi there,

I have an XML dictionary file with about 95,000 entries, 20 MB in size. Due to its nature I need to search it by different criteria (language, substring matching, ...), and I intend to use XSLT for that.
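
To make this concrete, here is a minimal sketch of the kind of search stylesheet I have in mind, assuming entries of the form <entry lang="..."><headword>...</headword></entry> (the element and attribute names are placeholders; my real format differs in detail):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Search sketch (XSLT 1.0); element names are placeholders. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <!-- search criteria, passed in as stylesheet parameters -->
  <xsl:param name="lang" select="'en'"/>
  <xsl:param name="term" select="''"/>

  <xsl:template match="/">
    <results>
      <!-- substring match on the headword, restricted to one language -->
      <xsl:copy-of select="/dictionary/entry[@lang = $lang]
                                            [contains(headword, $term)]"/>
    </results>
  </xsl:template>
</xsl:stylesheet>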
Now, judging from my latest experiments, I wonder whether XML/XSLT is a good choice for implementing such a thing, since, as far as I understand it, the whole file is parsed again each time I invoke the XSLT processor. Given how big the file is, I doubt this is efficient. I have also thought about using another stylesheet to preprocess the dictionary into separate, smaller XML files holding additional statistical data, in order to save some time, but I am not sure whether this would be useful. A sketch of such a preprocessing step follows below.
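
For example, run once per language over the placeholder structure from above, a preprocessing stylesheet could pull one language's entries out into a smaller file, with an entry count as a first bit of statistical data:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Preprocessing sketch (XSLT 1.0): extract one language's entries
     into a smaller file; run once per language. Element names are
     placeholders, as above. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:param name="lang" select="'en'"/>

  <xsl:template match="/">
    <dictionary lang="{$lang}"
                entries="{count(/dictionary/entry[@lang = $lang])}">
      <xsl:copy-of select="/dictionary/entry[@lang = $lang]"/>
    </dictionary>
  </xsl:template>
</xsl:stylesheet>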
What is your experience with this?
Many of you probably work with much larger data sets, so what would you do? A pointer to an online reference would also be fine :-)
I also tried to find benchmark results for the various XSLT processors, but the most recent comparison I found was done by www.xml.com back in 2001, and I believe there have been many changes, and improvements, to the various XSLT processors since then. Do you know of any more recent tests? So far I have only tried MSXML and Saxon, favouring the latter since it is platform-independent thanks to its Java implementation.

If someone could enlighten me on this, it would be very nice :-) My background is in SQL and databases such as DB2, so I apologize if some of my questions sound awkward or strange.

Greetings and thanks in advance,

Juggy


XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list

