Feature Request: Differentiate scenario level vs examples level parallelism #175
Comments
PS: I changed the issue from a question to a feature request. After poking around, I think I have a working POC, but I'm having issues with the JSON/HTML report. I'm also open to changing the approach. Hoping for your response, and thanks again for all the great work you're putting in to keep this library running :)
Hi @iamkenos, as discussed before, this is a great feature to implement in the testing framework.
Hi @iamkenos, thanks for making progress on this. To keep it moving, I did a POC to ensure we can run scenario outline examples in serial mode when the parallel scheme is set to "scenario", with the following result: the changes are relatively small, so no big refactoring is needed. Could you incorporate this approach into the PR you are creating? Just remember that I only edited the parallel execution by scenario so that it does not run the examples in parallel, in a way that makes selecting parallel execution by "examples" pretty straightforward (by not grouping the scenario outlines, as in line runner.py:584 in the branch I provided). Please let me know if you have any doubts, and we can continue making the necessary improvements. Thanks a lot!
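The grouping idea described above can be sketched roughly as follows. This is a toy model, not the code from the POC branch: the tuple shape, function names, and worker logic are made up for illustration. The point is that with the "scenario" scheme, all example rows of one outline land in a single work unit and therefore run serially inside one worker, while the "examples" scheme makes every row its own unit.

```python
from collections import defaultdict
from multiprocessing import Pool


def group_for_scheme(scenarios, scheme):
    """Build parallel execution units from a flat list of example rows.

    `scenarios` is a list of (feature, outline_name, example_row) tuples;
    this shape is illustrative, not behavex's internal model.

    - scheme == "examples": every example row is its own unit (current behavior).
    - scheme == "scenario": all rows of the same outline form one unit, so
      they execute serially inside a single worker.
    """
    if scheme == "examples":
        return [[s] for s in scenarios]
    groups = defaultdict(list)
    for feature, name, row in scenarios:
        groups[(feature, name)].append((feature, name, row))
    return list(groups.values())


def run_unit(unit):
    # Placeholder for launching behave on one unit; the rows of the
    # unit run serially here, one after another.
    return [f"ran {name} row {row}" for _feature, name, row in unit]


if __name__ == "__main__":
    scenarios = [
        ("login.feature", "Valid credentials", 1),
        ("login.feature", "Valid credentials", 2),
        ("login.feature", "Locked account", 1),
    ]
    units = group_for_scheme(scenarios, "scenario")
    # One unit per worker process; rows within a unit stay serial.
    with Pool(processes=2) as pool:
        results = pool.map(run_unit, units)
    print(results)
```

With scheme "scenario" the two "Valid credentials" rows share a worker; switching the scheme to "examples" would dispatch three independent units instead.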
Thanks @hrcorval for the quick response and for providing a POC branch! I totally agree with your points, especially on not using scenario names. I might be occupied for the next couple of days at work, but I will surely pick this up again and have a look at your implementation afterwards. Cheers!
Sure, don't hesitate to contact me if you have questions. Thanks!
Hi @hrcorval, it's been a while; just giving an update. I've seen and tested your branch and changes. It works perfectly, and the implementation is straightforward too. Please give me some time to finish this up with the examples scheme; I'm rather occupied on weekdays at work right now. I'll try to update the pull request before next week. Really appreciate your support and effort! 👏🏻
Hi @iamkenos. Really glad to see you are making good progress on this, and that the changes are working. Please let me know about anything you might need. Keeping an eye on the PR you will be providing.
Hi @hrcorval, I've updated the pull request. There are still some things I couldn't figure out 😓. Let's continue the discussion on the PR; I'd appreciate it if you could take a look. Thank you! Also, would it be possible to publish an alpha release of
Hi @iamkenos, thanks for all the improvements done on the PR. The following is the package created based on that code: In the meantime, I'll analyze and debug the issue you are reporting when executing at examples level.
Context:
I recently raised this question: #157.
Basically, I want a way to run scenarios in parallel at the Scenario level only, and not at the Example level. Our team really needs this feature.
I'm open to using a workaround if there is one; otherwise, I'm more than happy to contribute this change if you can provide guidance.
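To make the distinction concrete, consider a hypothetical feature file like the one below (the names are invented for illustration). With example-level parallelism, each row of the Examples table is dispatched to its own worker; with scenario-level parallelism, both rows of the outline stay together as one unit and run serially.

```gherkin
Feature: Login
  Scenario Outline: Valid credentials
    Given the user "<user>" exists
    When they sign in with "<password>"
    Then they see the dashboard

    Examples:
      | user  | password |
      | alice | secret1  |
      | bob   | secret2  |
```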
Proposed Approach
I'm thinking we can add a scheme, something like --parallel-scheme=examples (which would be the default scheme), where it runs in parallel at the examples level (current behavior), and refactor --parallel-scheme=scenario so it runs in parallel at the scenario level.
POC
I have created this branch from the tip of 4.0.9, which I think kind of does the trick (executing using scenario names), but I'm not too sure how to get the JSON skeleton so the report renders properly. See: behavex/outputs/report_html.py:592
Thanks in advance!