
Re: [xsl] xslt test automation


Subject: Re: [xsl] xslt test automation
From: Philip Fearon <pgfearo@xxxxxxxxxxxxxx>
Date: Tue, 30 Nov 2010 21:39:38 +0000

On Tue, Nov 30, 2010 at 6:55 PM, Dave Pawson <davep@xxxxxxxxxxxxx> wrote:
> On Tue, 30 Nov 2010 17:14:16 +0000

>> >> (2) the XML summarising the output
>> > Common to any testing?
>> No, just to XSLT testing.
>
> My point was that this summary is common to any
> testing, XSLT or otherwise?
>
OK, I'm sure there are commonalities; I'm just trying to control
scope, that's all.
>
>> > reference to test definition(s)
>> > Test count run
>> > Tests passed, failed, not run.
>> >
>> > Oddity.
>> >  templates [matched/named]used
>> >  Templates [matched/named] not used.
>> >  Input elements not matched (??? If applicable)
>> All included in the output summary. Unmatched templates appear as
>> errors
>
> Ah, you're saying what you already do? I understand.
> It may not be an error though? For this run I may
> not need the output of a particular template?
> Report it, but let the user define it as an error or otherwise?
>
At the moment this is low-level, so if the processor reports an error,
for example on being unable to locate a named template, that's how it's
reported.
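For illustration, the summary could carry the counts you list along
these lines (the element and attribute names here are just
placeholders, not the actual report schema):

  <test-summary tests="12" passed="10" failed="1" not-run="1">
    <templates>
      <template kind="matched" pattern="section" used="true"/>
      <template kind="named" name="make-toc" used="false"/>
    </templates>
    <error test="t07">Unable to locate named template 'make-toc'</error>
  </test-summary>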
>
>
>> >  functions used.
>> >  XML comparison of expected/actual from each template.... Possible?
>> >      Not sure. How to encapsulate depth? XMLdiff definitely needed.
>> Agree XMLdiff would be invaluable for regression testing, but this
>> isn't the only kind of test.
>
> You didn't classify the 'kinds' of tests, hence IMHO it is
> needed in a GP XSLT test setup?
>
Agreed, this is a basic need; I will include an XSLT implementation of
XMLdiff. At the moment it's left to the customer.
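As a stop-gap, a crude regression check can be done with deep-equal()
in XSLT 2.0, though it only gives pass/fail and says nothing about
where the trees diverge (names here are illustrative, not from the
actual implementation):

  <xsl:template name="compare-output">
    <xsl:param name="expected" as="document-node()"/>
    <xsl:param name="actual" as="document-node()"/>
    <!-- deep-equal() compares the full trees; a real XMLdiff
         would also report the point of divergence -->
    <result pass="{deep-equal($expected, $actual)}"/>
  </xsl:template>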
>

> "The requirement is that all output data not normally accessible to
>  XSLT-based frameworks is collated into a single XML resource,"
>
> As a requirement of what testing you want to do I find that a bit
> on the vague side?

Yes, very vague, but it's a start, providing a 'hook' for existing
frameworks to exploit.
>
> Given a requirement to 'produce a transform to take Schema X instance
> and produce something to Schema Y', what requirement would you
> put on the testing of the work done by the XSLT author?
>
Schemas can represent all kinds of rules, so this can be part of the
test. Currently the validation report uses a variation of the schema
used for the transformation report; eventually it should be possible
to combine these.
In the current implementation, for XSD, a tester normally selects an
XSD resource folder and is presented with the top-level files
associated with each target namespace; these are then assigned to the
input or output schema validator. All input and output files are also
scanned for schema-location hints, and an attempt is made to resolve
these so they're presented in a list ready for use. Validation is
performed in a separate batch process from the transform, so this is
semi-automatic at best.
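The hint scan itself is straightforward in XSLT 2.0; roughly this
(with xsi bound to the XML Schema instance namespace, and the result
element names invented for the example):

  <xsl:template name="scan-schema-hints">
    <xsl:param name="docs" as="document-node()*"/>
    <schema-hints>
      <!-- xsi:schemaLocation holds namespace/location pairs,
           so the even-position tokens are the location hints -->
      <xsl:for-each select="distinct-values(
          ($docs//@xsi:schemaLocation/tokenize(normalize-space(.), ' ')
               [position() mod 2 eq 0],
           $docs//@xsi:noNamespaceSchemaLocation))">
        <hint uri="{.}"/>
      </xsl:for-each>
    </schema-hints>
  </xsl:template>

Resolving each hint against its document's base URI is left out here.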

> Some easy ones, some not so easy. "Has he/she done their job/what was
> asked of them" is part of the question to answer.
>
Yes, traceability to requirements is critical to any acceptance
testing. For smaller projects I can see some benefit in using
XSLT/EXPath to maintain a spreadsheet/database, keeping links between
requirements, satisfaction arguments, the tests, and the test results.
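A minimal traceability report could then be an XSLT join over the two
documents, something like this (the document structure is invented for
the example):

  <xsl:template name="trace-report">
    <xsl:param name="requirements" as="document-node()"/>
    <xsl:param name="results" as="document-node()"/>
    <trace>
      <!-- flag each requirement covered by at least one passing test -->
      <xsl:for-each select="$requirements//requirement">
        <requirement id="{@id}" covered="{exists(
            $results//test[@req eq current()/@id][@pass eq 'true'])}"/>
      </xsl:for-each>
    </trace>
  </xsl:template>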

> I'm sure there's lots more than that.
>
> Far easier to test XSLT 2.0 functions against expected values etc.
>
Yes, testing extension functions with side effects, like EXPath's, is
also a consideration. Only a .NET implementation of the EXPath ZIP
module (first draft) is currently catered for internally, but more
general support could follow.
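For the side-effect-free case you mention, a value test needs very
little machinery; for example (my:pad standing in for whatever
function is under test):

  <xsl:template name="test-pad">
    <xsl:variable name="expected" select="'007'"/>
    <xsl:variable name="actual" select="my:pad('7', 3)"/>
    <!-- record expected and actual so failures are self-describing -->
    <test name="my:pad" pass="{$actual eq $expected}"
          expected="{$expected}" actual="{$actual}"/>
  </xsl:template>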
> How much can you mess with the XSLT being tested before someone cries
> foul?
Ultimately it's down to the tester to make a judgement on how far they
want to go with this; there will be limits.

Phil Fearon
http://qutoric.com

