
[Do Not Merge] New GUI Test Tools #447

Draft · wants to merge 18 commits into base: main

Conversation

corranwebster (Contributor):

This is a proposed new version of the GUI test assistant that is more stand-alone and backend-independent. The biggest changes are to rely more on pyface.timer (which in turn implies some changes to that API) and to push a lot of the backend dependent behaviour onto the GUI class (in hopefully a sensible way).

The API is tested out on the Window class.

Tests need to be written.

codecov-io commented Sep 16, 2019:

Codecov Report

Merging #447 into master will increase coverage by 0.36%.
The diff coverage is 68.11%.


@@            Coverage Diff             @@
##           master     #447      +/-   ##
==========================================
+ Coverage   36.63%   36.99%   +0.36%     
==========================================
  Files         481      487       +6     
  Lines       26424    26722     +298     
  Branches     3917     3953      +36     
==========================================
+ Hits         9680     9886     +206     
- Misses      16321    16394      +73     
- Partials      423      442      +19
Impacted Files Coverage Δ
pyface/testing/util.py 0% <0%> (ø)
pyface/i_gui.py 80.55% <100%> (+2.43%) ⬆️
pyface/qt/__init__.py 46.93% <100%> (+2.25%) ⬆️
pyface/i_window.py 82.35% <100%> (-15.65%) ⬇️
pyface/ui/qt4/confirmation_dialog.py 100% <100%> (ø) ⬆️
pyface/toolkit_utils.py 100% <100%> (ø)
pyface/ui/wx/window.py 90.24% <20%> (+10.49%) ⬆️
pyface/ui/qt4/window.py 72.8% <57.14%> (-1.07%) ⬇️
pyface/ui/wx/timer/timer.py 87.5% <62.5%> (+24.34%) ⬆️
pyface/ui/wx/gui.py 66.66% <66.66%> (+18.39%) ⬆️
... and 24 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 42bdac6...6d23d86. Read the comment docs.

corranwebster (Contributor, Author):

@mdickinson could you please have a look at the proposed API (particularly for GUITestCase and EventLoopHelper). The main changes are:

  • it is a subclass of TestCase rather than a mixin (I could be convinced either way)
  • most toolkit-dependent event loop stuff is pushed to the GUI class
  • it uses pyface.timers rather than working at the toolkit level for timers
  • no misleading context managers

There are some additional changes I'd be interested in making (such as renaming the event_loop_... methods to just loop_...), and there is currently no replacement for the ModalDialogTester (and probably won't be in this PR). There are also errors on Windows that need to be wrangled, but I'm not going to invest that effort until I've had feedback that this is a better solution than the current one.
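For context, the semantics of the condition-based helper under discussion can be sketched toolkit-free. This is illustrative only: the names `loop_until_condition` and `ConditionTimeoutError` mirror the API described in this thread, and the real helper pumps the GUI event loop rather than polling with `time.sleep`.

```python
import time


class ConditionTimeoutError(RuntimeError):
    """Raised when the condition is not met within the timeout
    (name taken from the discussion; this implementation is a sketch)."""


def loop_until_condition(condition, timeout=10.0, poll=0.05):
    """Poll ``condition`` (a no-argument callable returning bool) until
    it returns True, or raise ConditionTimeoutError after ``timeout``
    seconds.

    The real pyface helper runs the toolkit event loop instead of
    sleeping; only the control flow is illustrated here.
    """
    deadline = time.monotonic() + timeout
    while not condition():
        if time.monotonic() > deadline:
            raise ConditionTimeoutError(
                "condition not met within %s seconds" % timeout)
        time.sleep(poll)
```

Written this way, a `repeat`-style loop is just a special case where the condition counts calls, which is the argument for preferring the condition-based API in tests.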

finally:
self.gui.quit_on_last_window_close = flag

def event_loop(self, repeat=1, allow_user_events=True):
mdickinson (Member):

I'd really like to include some advice in the docstring that if you find yourself needing to use a repeat of greater than 1, you should consider whether you can use event_loop_until_condition instead.

repeat : positive int
The number of times to call process events. Default is 1.
allow_user_events : bool
Whether to process user-generated events.
mdickinson (Member):

Perhaps add "for example keyboard and mouse events" to clarify what's meant by "user-generated events" in this context?
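Taken together, the two docstring suggestions above might read something like this (a sketch only; the method name and parameters are as in the excerpt, the wording is illustrative):

```python
def event_loop(self, repeat=1, allow_user_events=True):
    """ Run the real event loop ``repeat`` times.

    If you find yourself needing a ``repeat`` greater than 1, consider
    whether ``event_loop_until_condition`` would express the intent
    more directly.

    Parameters
    ----------
    repeat : positive int
        The number of times to call process events. Default is 1.
    allow_user_events : bool
        Whether to process user-generated events (for example,
        keyboard and mouse events).
    """
```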

----------
condition : callable
A callable to determine if the stop criteria have been met. This
should accept no arguments.
mdickinson (Member):

I think this should also be explicit about the expected return type of the callable.

should accept no arguments.

timeout : float
Number of seconds to run the event loop in the case that the trait
mdickinson (Member):

Copypasta here: "the trait change does not occur".

)
repeat_timer.on_trait_change(self._on_stop, 'active')

try:
mdickinson (Member):

Is there a way to engineer a test for the corner case where the timer timeout expires before the event loop is started? (Perhaps by patching start_event_loop to sleep for some time first.) I'm reasonably convinced that the current logic is fine (_on_stop must be called from the main thread, so it can't possibly be called until the event loop starts), but it would be good to have a test to prevent future regressions.

What happens if the timeout is zero? Is that a use-case we want to support, or should the docstring specify that timeout should be positive?

Parameters
----------
timeout: float, optional, keyword only
Number of seconds to run the event loop. Default value is 10.0.
mdickinson (Member):

The repeat parameter isn't documented.

mdickinson (Member):

We should also document the conditions under which ConditionTimeoutError is raised.

I'm not really sure I understand when this method would be used, and when it should be considered that an error has occurred.


Parameters
----------
timeout: float, optional, keyword only
mdickinson (Member):

"keyword only" doesn't seem to actually be true here. Presumably the intent is to add that extra "*" to the signature after Python 2 support has been dropped? (If so, should there be an issue open for that?)

A callable to determine if the stop criteria have been met. This
should accept no arguments.

timeout : float
mdickinson (Member):

Should this also be documented as "keyword only", to match event_loop_with_timeout?

self.addCleanup(self._gc_collect)
self.addCleanup(self.event_loop_helper.event_loop, 5, False)

super(GuiTestCase, self).setUp()
mdickinson (Member):

I'm not sure this is useful. It doesn't do anything as-is, so presumably the intent is that in a multiple inheritance situation (class MyTestCase(GuiTestCase, OtherTestCaseSubclass): ...), MyTestCase.setUp could do a super(MyTestCase, self).setUp() and have both GuiTestCase.setUp and OtherTestCaseSubclass.setUp being called as a result. But that likely won't work, because of the UnittestTools mixin (which doesn't use super), so I think this line just ends up misleading people into thinking that the super cooperative multiple inheritance pattern will work.

If we want this to work, there may be a case for inlining the UnittestTools machinery instead of subclassing UnittestTools. Otherwise, I'd suggest just removing this line (and the corresponding line in the tearDown).

mdickinson (Member) Sep 17, 2019:

But that likely won't work

Okay, thinking about this, I'm not sure any more. It may work after all. The UnittestTools class will just be effectively invisible to the super machinery, because it doesn't implement setUp and tearDown at all.

Can we have a test for multiple inheritance cases that verifies that both the parent class setUp methods get called as expected (regardless of the ordering of the parents)? One way that this could potentially break is if some future version of UnittestTools does decide to implement setUp, so it would be useful to have a test that alerts us to the breakage.
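A toolkit-free sketch of such a test (the class names here are stand-ins, not the real pyface classes): when every parent that defines setUp cooperates via super(), both setUp methods should run, in MRO order.

```python
import unittest


class RecordingMixin:
    """Stand-in for a cooperative sibling TestCase base (hypothetical)."""

    def setUp(self):
        super().setUp()
        self.calls = getattr(self, "calls", []) + ["RecordingMixin"]


class GuiLikeTestCase(unittest.TestCase):
    """Stand-in for GuiTestCase, which calls super().setUp()."""

    def setUp(self):
        super().setUp()
        self.calls = getattr(self, "calls", []) + ["GuiLikeTestCase"]


class BothParents(RecordingMixin, GuiLikeTestCase):
    """Multiple inheritance: both parents' setUp methods must run."""

    def test_both_setups_ran(self):
        # super() chains through the MRO, so the later class in the
        # MRO (GuiLikeTestCase) records first.
        self.assertEqual(self.calls, ["GuiLikeTestCase", "RecordingMixin"])
```

A second variant with the parents in the opposite order would guard against the ordering concern raised above; a future UnittestTools that grows a non-cooperative setUp would make such a test fail visibly.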

# Some top-level widgets may only be present due to cyclic garbage not
# having been collected; force a garbage collection before we decide to
# close windows. This may need several rounds.
for _ in range(10):
mdickinson (Member):

It may be worth adding an "else" clause here that complains loudly (perhaps with a warnings.warn warning of some kind) if we reach it; if 10 rounds of garbage collection still haven't cleaned up everything, then something's likely seriously wrong (e.g., a separate thread that's busily creating new cycles while we're trying to clean them up) and worthy of investigation.
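The suggested for/else could look like this (a sketch, independent of the PR's actual cleanup code; the function name is hypothetical):

```python
import gc
import warnings


def collect_cycles(max_rounds=10):
    """Run up to ``max_rounds`` of garbage collection, stopping early
    once a round finds no unreachable objects.

    If every round still finds garbage, complain loudly via the
    for/else clause, since that suggests something (e.g. another
    thread) is creating new cycles while we clean up.
    """
    for _ in range(max_rounds):
        if gc.collect() == 0:
            break
    else:
        warnings.warn(
            "Cyclic garbage still present after %d collection rounds; "
            "something may be creating cycles concurrently." % max_rounds,
            RuntimeWarning,
        )
```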

""" If 'stream' is None, provide a temporary handle to /dev/null. """

if stream is None:
out = open(os.devnull, 'w')
mdickinson (Member):

For Python 3, I suggest adding an explicit encoding here. Seems nitpicky, but there's at least a possibility that the system encoding doesn't allow arbitrary Unicode to be written.
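The fix being suggested is roughly the following (the helper name and return shape are hypothetical; only the explicit `encoding` argument is the point):

```python
import os


def null_stream(stream=None):
    """If ``stream`` is None, return a handle to os.devnull together
    with a flag saying whether the caller should close it.

    The explicit UTF-8 encoding ensures arbitrary Unicode can be
    written even when the system default encoding is restrictive.
    """
    if stream is None:
        return open(os.devnull, "w", encoding="utf-8"), True
    return stream, False
```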

mdickinson commented Sep 17, 2019:

I left a few comments on the code, mostly (but not entirely) at nitpick level.

  • it is a subclass of TestCase rather than a mixin (I could be convinced either way)

Me too; there are tradeoffs either way, but overall I think I'm happier with the subclass approach. If it makes GuiTestCase usable with the usual super pattern in a multiple inheritance situation, so much the better.

Overall the API looks good to me, but I don't think I understand what EventLoopHelper.event_loop_with_timeout and GuiTestCase.event_loop_with_timeout are for. From just the name and the signature, I'd assume that these are for cases where you want to say "I've set things up so that the event loop will only run for a short finite time. Now either go and run the event loop until it stops naturally, or if it hasn't stopped after <timeout> seconds raise a timeout exception." But that doesn't seem to be what the code is doing.

Will there be unit tests for the EventLoopHelper and the GUITestCase?


(Edited to change EventLoop to EventLoopHelper and fix other minor typos.)

mdickinson (Member):

N.B. I didn't really look at the parts of this PR that aren't directly related to the GuiTestCase.

corranwebster (Contributor, Author):

Yes, event_loop_with_timeout was problematic, and the docstring got out of sync with the final version. The original intent was the same as in the current GuiTestAssistant: run the actual event loop at least repeat times, with a timeout if it takes too long to repeat that many times.

I then changed it to the behaviour the current docstring describes ("run for timeout seconds, no matter what") because of issues getting it to work on wxPython; but it turned out that the real problem was that under wxPython you can't start the event loop until at least one window exists, so I reverted to the original intent and used a different approach for the use case that was causing problems.

But given that we are using it in just one place, it may need a re-think or removal (even the old version is not used much). After all, without the repeat condition, it is just event_loop_until_condition where the condition always returns False.
