
Module talk:UnitTests

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Aidan9382 (talk | contribs) at 09:37, 28 January 2023 (replies (I can give these a shot)). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Test group order

At the moment, the test groups appear on the page in random order. Is there any way of getting them to appear in the order that they are defined? — Mr. Stradivarius ♪ talk ♪ 15:30, 3 April 2013 (UTC)[reply]

As far as I know it's not possible to determine the order in which tests were defined, unless perhaps the test framework reads the module page (Lua source) itself. It's also not clear this is always the desired behaviour. There are a couple of other approaches: always using alphabetical order, and giving your tests numeric prefixes so that alphabetical order matches the intended order; or adding an option to explicitly specify the order. Dcoetzee 17:09, 3 April 2013 (UTC)[reply]
Putting them in alphabetical order would be a good way around the problem. The test cases on Module talk:Delink/testcases don't appear in alphabetical order, though - it's just random, as far as I can tell. I think it's probably the natural table order that pairs() found, although I don't know the code well enough to try and change it. (And I might cause some disruption to other people's testing if I do.) — Mr. Stradivarius ♪ talk ♪ 19:15, 3 April 2013 (UTC)[reply]
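The random order discussed above comes from Lua's pairs(), which makes no guarantee about key iteration order. A common workaround (a sketch of the idea, not the module's actual code; the testcases variable is illustrative) is to collect and sort the keys before iterating:

```lua
-- Iterate a table of test functions in alphabetical key order.
-- pairs() gives no ordering guarantee, so sort the keys explicitly.
local function sortedPairs(t)
	local keys = {}
	for k in pairs(t) do
		keys[#keys + 1] = k
	end
	table.sort(keys)
	local i = 0
	return function()
		i = i + 1
		if keys[i] ~= nil then
			return keys[i], t[keys[i]]
		end
	end
end

-- Usage: tests named test_01_foo, test_02_bar then run in that order.
for name, func in sortedPairs(testcases) do
	-- run func() and record the result under `name`
end
```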

Don't treat the result of a call as Wikitext

Hi UnitTesters. I'm failing to find the proper format for the nesting tests at Module:Delink/testcases function test_nesting. How do I do that? What I want is the expanded result to be compared to the non-expanded expected cases. Martijn Hoekstra (talk) 15:50, 3 April 2013 (UTC)[reply]

I've fixed the problem using the nowiki option. The option isn't documented yet, so I might go through and add it when I have a second. — Mr. Stradivarius ♪ talk ♪ 19:10, 3 April 2013 (UTC)[reply]
Thanks for adding nowiki option. It was very useful at commons:Module_talk:Coordinates/testcases. --Jarekt (talk) 18:19, 13 December 2013 (UTC)[reply]

Alternative implementation

This module has several shortcomings:

  • logic and presentation are mixed together, making it impossible to present the tests in different formats
  • as a consequence, it is not possible to run tests in the debug console (which is quite convenient when you need to change the tests)
  • there is no line number for failed assertions
  • errors thrown by tested code are not caught

I created an alternative test module in hu.wikipedia and would welcome any comments or feature requests. The module is at hu:Modul:Homokozó/Tgr/ScribuntoUnit, a usage example is at hu:Modul:Homokozó/Tgr/Set/tests (the documentation is in Hungarian, but comments and variable names are in English, and the code follows xUnit conventions, so understanding it shouldn't be a problem). It throws exceptions from failed assertions, builds a result table based on which tests throw exception/error, and can then present the results in any way; I believe the separation of actual testing and display code makes it more maintainable and reusable. --Tgr (talk) 15:17, 25 May 2013 (UTC)[reply]
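The xUnit-style approach described above (assertions throw, a runner catches them and tabulates results, presentation happens separately) can be sketched in plain Lua with pcall. The names here are illustrative, not the hu.wikipedia module's actual API:

```lua
-- Minimal xUnit-style runner: assertions raise errors, the runner
-- catches them with pcall and builds a result table, leaving
-- presentation to a separate rendering step.
local function assertEquals(expected, actual, message)
	if expected ~= actual then
		error(string.format('%s: expected %q, got %q',
			message or 'assertEquals failed',
			tostring(expected), tostring(actual)), 2)
	end
end

local function runSuite(suite)
	local results = {}
	for name, test in pairs(suite) do
		local ok, err = pcall(test)
		results[#results + 1] = { name = name, passed = ok, error = err }
	end
	return results  -- a renderer can format this table any way it likes
end
```

Because errors from the tested code are also caught by pcall, this design addresses the "errors thrown by tested code are not caught" shortcoming as well.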

@Tgr: didn't have time to fully read it, would it be much work on the test cases to convert them to your new module ? —TheDJ (talkcontribs) 20:53, 4 June 2013 (UTC)[reply]
I created Module:UnitTests/sandbox, which right now mixes logic and presentation in only three places. I created Module talk:Citation/CS1/testcases2 to make sure it still works and for comparison purposes. testcases2 uses 6.91 MB of memory and takes 3.964 seconds to process tests, compared to testcases, which uses 7.23 MB of memory and takes 4.933 seconds. Maybe with full separation of logic and presentation the memory footprint and processing time can be decreased further. I think another approach to unit testing would be better, but that would require rewriting current tests, which, as in the case of the citation module, could take a bit of effort. --darklama 15:11, 5 June 2013 (UTC)[reply]

Test a string contains expected text

Please see Module talk:ScribuntoUnit#Test a string contains expected text for an enhancement request. --Derbeth talk 21:41, 1 January 2014 (UTC)[reply]

Compare template vs. module

When comparing template vs. module, going with "==" seems wrong. The template and module may differ in ways that don't matter, e.g. the number and type of whitespace characters, or HTML entity representations (&nbsp;, etc.). I think this is a good sample: Module:Sandbox/Dts/testcases. As a first stage, I'd suggest making first_difference a member function; if this member function returns nil, the strings are considered identical. This would allow tests to define their own comparison method. A second stage would be to offer some pre-created options. Tsahee (talk) 20:44, 19 January 2014 (UTC)[reply]
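One way to read the member-function suggestion: make the comparison an overridable method on the tester object, so a testcases page can substitute a whitespace-insensitive version. A sketch (the override shown is illustrative, not part of the module):

```lua
-- Default comparison: returns nil when the strings are considered
-- equal, otherwise the index of the first differing character.
function UnitTester:first_difference(s1, s2)
	if s1 == s2 then return nil end
	local len = math.min(#s1, #s2)
	for i = 1, len do
		if s1:sub(i, i) ~= s2:sub(i, i) then return i end
	end
	return len + 1
end

-- A testcases page could then override it to, say, collapse
-- whitespace runs before comparing:
function p:first_difference(s1, s2)
	local norm = function(s) return (s:gsub('%s+', ' ')) end
	return UnitTester.first_difference(self, norm(s1), norm(s2))
end
```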

Add a value to nowiki to show the wikitext only if the actual result does not contain a script error

I suggest making the nowiki option support a string like "if no errors" as a value that would make mw.text.nowiki not be applied on the actual result if a script error can be detected in it. If there's a script error, the wikitext is of no use (it will be the same regardless of the error), while the rendered result can be clicked on to show the error message, making it easier to fix. --Mark Otaris (talk) 16:42, 13 October 2015 (UTC)[reply]
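Script errors in expanded wikitext are rendered inside an element with the error class, so the "if no errors" behaviour could be sketched as a simple pattern check before applying mw.text.nowiki. A sketch only; the class-based detection and the function name are assumptions about how such a check might be written:

```lua
-- Apply nowiki to the actual result only when it does not appear
-- to contain a rendered script error.
local function maybeNowiki(actual, options)
	if options.nowiki == 'if no errors'
		and actual:find('class="error"', 1, true) then
		-- Leave the rendered result clickable so the error
		-- message can be inspected.
		return actual
	end
	return mw.text.nowiki(actual)
end
```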

{{#invoke:UnitTests/testcases/frame | _test}} gives different results

Why does {{#invoke:UnitTests/testcases/frame | _test}} give different results for a direct #invoke vs. an invoke via the UnitTests module? --Ans (talk) 13:58, 11 October 2017 (UTC)[reply]

templatestyles

Module:Citation/CS1 supports some 25 live templates and Module:Citation/CS1/sandbox supports an equal number of sandbox templates. We could have added WP:TemplateStyles markup (<templatestyles src="<name>/styles.css" />) 25× to the live templates and 25× to the sandboxen, but that just seemed dumb, so each of the modules concatenates template styles to the end of the cs1|2 template rendering using this (where styles is the name of the appropriate css page):

frame:extensionTag ('templatestyles', '', {src=styles})

and that works great.

Except in Module:Citation/CS1/testcases.

Where every test fails. 318 failures. There are differences between the live and sandbox modules but not that many.

Because of TemplateStyles. Why? Because TemplateStyles inserts a stripmarker at the end of every cs1|2 template rendering and each stripmarker has a unique id number. So, this always fails:

{{#ifeq:{{cite book |title=Title}}|{{cite book |title=Title}}|ok|FAIL}}
FAIL

even though the two {{cite book}} templates are identically written. Here are two transclusions of identical templates; note the stripmarkers at the ends:

'"`UNIQ--templatestyles-00000006-QINU`"'<cite class="citation book cs1"></cite> <span class="cs1-visible-error citation-comment"><code class="cs1-code">{{[[Template:cite book|cite book]]}}</code>: </span><span class="cs1-visible-error citation-comment">Empty citation ([[Help:CS1 errors#empty_citation|help]])</span>
'"`UNIQ--templatestyles-00000008-QINU`"'<cite class="citation book cs1"></cite> <span class="cs1-visible-error citation-comment"><code class="cs1-code">{{[[Template:cite book|cite book]]}}</code>: </span><span class="cs1-visible-error citation-comment">Empty citation ([[Help:CS1 errors#empty_citation|help]])</span>

To get round this, I have hacked Module:UnitTests function preprocess_equals_preprocess() (called only by preprocess_equals_preprocess_many()) to accept a new option.templatestyles. When that option is set to true, the code looks at the content of expected and extracts the templatestyles stripmarker identifier (an 8-digit hex number). It then overwrites the templatestyles stripmarker identifier in actual so that they both have the same identifier. Only then does preprocess_equals_preprocess() compare actual against expected.
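The normalisation described above can be sketched with two string operations: capture the 8-digit hex identifier from expected, then rewrite the identifier in actual to match. A sketch of the idea, not the exact code of the hack:

```lua
-- TemplateStyles stripmarkers look like:
--   '"`UNIQ--templatestyles-00000006-QINU`"'
-- Rewrite `actual` so it carries the same identifier as `expected`,
-- letting otherwise-identical renderings compare equal.
local function normalize_stripmarkers(actual, expected)
	local id = expected:match(
		'UNIQ%-%-templatestyles%-(%x%x%x%x%x%x%x%x)%-QINU')
	if id then
		actual = actual:gsub(
			'(UNIQ%-%-templatestyles%-)%x%x%x%x%x%x%x%x(%-QINU)',
			'%1' .. id .. '%2')
	end
	return actual
end
```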

If you are looking to test changes in ~/sandbox/styles.css compared to ~/styles.css, this change won't help you – and Module:UnitTests is probably the wrong tool anyway, because stripmarkers are replaced with their actual content after this module has run.

I suppose that there might be reasons that we might want to expand the capabilities of this functionality though I'm not sure just what those reasons might be. For example these possibilities:

none – remove templatestyles stripmarkers from both actual and expected; no styling applied to the renderings
actual – replace templatestyles stripmarker identifier in expected with the templatestyles stripmarker identifier from actual; both use ~/sandbox/styles.css for styling

Perhaps there are others.

Trappist the monk (talk) 19:27, 28 March 2019 (UTC)[reply]

Good work, life is getting complicated! Johnuniq (talk) 23:14, 28 March 2019 (UTC)[reply]

Table of Contents

What about generating a table of contents? Trigenibinion (talk) 14:54, 12 March 2021 (UTC)[reply]

Conflicts with 'Module:No globals'

A handful of functions in this module are not marked 'local', but could and (arguably) should be. --86.143.105.15 (talk) 10:36, 27 January 2022 (UTC)[reply]

Tests not failing when they should

I'm creating testcases for Module:GetShortDescription and Module:AnnotatedLink and due to my own derping, left some copy-pasta typos which should have caused a series of tests to fail, but they did not. I fixed the typos but purposefully altered one test to fail and it sailed through running {{#invoke:AnnotatedLink/testcases|run_tests}}. Am I doing something wrong, or is there a problem with this module? Fred Gandt · talk · contribs 09:42, 27 January 2023 (UTC)[reply]

"Test methods like test_hello above must begin with 'test'". I knew that. Fred Gandt · talk · contribs 09:45, 27 January 2023 (UTC)[reply]
Might I suggest not outputting "All tests passed." when no tests have been run? Fred Gandt · talk · contribs 11:18, 27 January 2023 (UTC)[reply]
 Done. I've also included the total number of tests run in general. Aidan9382 (talk) 11:49, 27 January 2023 (UTC)[reply]
Very nifty; thank you 😊 Fred Gandt · talk · contribs 13:01, 27 January 2023 (UTC)[reply]

Allow nowiki option to have three states

Currently it appears that nowiki is a boolean option; could we have a third option to display both the <nowiki>...</nowiki> and parsed results? We could of course double all tests where this might be desirable, but 1) they might not end up anywhere near each other, and 2) it'd be inefficient. Suggest: {nowiki = 2} (semantically nice and easy math) for the third state. Fred Gandt · talk · contribs 20:36, 27 January 2023 (UTC)[reply]

For the sake of ease in handling the code, and the fact I'd rather keep those options as just "truthy" checks instead of exact == checks (the only reason it's =1 in the doc is probably because it's shorter than typing =true), does something like a separate nowikiplus or combined option sound better? I'll probably have to standardise the module a little so that adding this doesn't mean pasting the same code in 5 different functions, but it should be doable (I'll just have to think about how to lay it out in the output). Aidan9382 (talk) 09:37, 28 January 2023 (UTC)[reply]
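A combined option kept as a plain truthy check could look something like this (a sketch; the option name and output layout are assumptions from the suggestion above, not implemented behaviour):

```lua
-- Show the nowiki'd wikitext, the parsed result, or both,
-- using plain truthy checks on the options table.
local function formatResult(actual, options)
	if options.combined then
		return mw.text.nowiki(actual) .. '<br />' .. actual
	elseif options.nowiki then
		return mw.text.nowiki(actual)
	end
	return actual
end
```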

Present failed tests together at the top

Another suggestion: present all failed tests at the top of the results. This might be achieved multiple ways, and someone with greater familiarity with the code might be best suited to decide exactly what approach is best. As a user, seeing two sections – the uppermost for failed and the next for passed tests – would be ideal. Section depth should be unimportant unless the results are substed (but who would do that?); lvl 3 sections should be fine since the whole lot can be placed under a standard lvl 2 section for posterity. Diverting the results to the appropriate section as their condition becomes known should be trivial (easy for me to say, right? I'm a tad busy right now but will tackle it myself if necessary). Fred Gandt · talk · contribs 09:24, 28 January 2023 (UTC)[reply]
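Collecting results into two tables and emitting the failed bucket first would be one straightforward approach. A sketch with illustrative names, not a proposed final implementation:

```lua
-- Divert each result row into a "failed" or "passed" bucket as its
-- condition becomes known, then render the failures first.
local failed, passed = {}, {}

local function record(row, ok)
	local bucket = ok and passed or failed
	bucket[#bucket + 1] = row
end

local function render()
	local out = {}
	out[#out + 1] = '=== Failed tests ==='
	out[#out + 1] = table.concat(failed, '\n')
	out[#out + 1] = '=== Passed tests ==='
	out[#out + 1] = table.concat(passed, '\n')
	return table.concat(out, '\n')
end
```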

I'll try to work this out in the sandbox too. Aidan9382 (talk) 09:37, 28 January 2023 (UTC)[reply]