Automatic housekeeping #16

Open
rylnd opened this issue Feb 11, 2013 · 7 comments

@rylnd (Owner) commented Feb 11, 2013

Similar to #15, it would be nice to keep track of what's done inside of an individual test, and rollback those modifications at the end of the test. I'm thinking something similar to rails console --sandbox, but we'd only have to keep track of modifications made to things we provide helpers for (stubbed commands, backup files, etc.)
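A rough sketch, in bash, of what that bookkeeping could look like (the helper names stub_command, backup_file, and run_cleanup are hypothetical, not existing shpec helpers): every modification records its own undo action, and the undo actions are replayed at the end of the test.

# hypothetical helpers: each modification records its own undo action
cleanup_actions=()

stub_command () {   # shadow a real command with a no-op function
  eval "$1 () { :; }"
  cleanup_actions+=("unset -f $1")
}

backup_file () {    # copy a file aside so it can be restored later
  cp "$1" "$1.bak"
  cleanup_actions+=("mv '$1.bak' '$1'")
}

run_cleanup () {    # roll everything back at the end of the test
  for action in "${cleanup_actions[@]}"; do
    eval "$action"
  done
  cleanup_actions=()
}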

@thinkerbot

A bunch of your open issues seem to revolve around this, so I'm commenting here because it seems the most relevant.

@rylnd, in #34 you mention the limitations of the current syntax and I think you are up against them. At present shpec allows carryover between examples and between files (see here) because all of the files are sourced into shpec which then runs linearly through the examples. That means they all share the same context.

One way to combat the carryover is to source the files within a subshell. That way they will still have the functions you define earlier, but changes to pwd and variables will not leak out.

time for file in $files; do
  (. "$file")
done

However, doing this makes counting failures a bit tricky. You can't increment a variable because the changes will not be seen outside the subshell! You can count the number of files that pass within the for loop, but I'm not sure how you would count the examples short of reading the output stream. Moreover, I don't think shpec will be able to do the same trick for examples; they're executed inline so at present the user would have to manually wrap each in a subshell. Or you do some preprocessing.
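For the per-file counts, exit status is enough, since that does escape the subshell. A sketch (assuming each sourced file exits non-zero when any of its examples fail):

passed_files=0 failed_files=0
for file in $files; do
  # only the subshell's exit status escapes, so count with that
  if (. "$file"); then
    passed_files=$((passed_files + 1))
  else
    failed_files=$((failed_files + 1))
  fi
done
echo "$passed_files file(s) passed, $failed_files file(s) failed"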

It's not that there's anything wrong with shpec itself. The carryover is just the way shell works when you do it this way.

I've got a shell test script of my own -- ts. It takes a different approach and may give you some ideas for how you might tackle carryover. ts defines each test in a function, then makes the running script call each function within a subshell. That way each test is isolated. Success/failure is communicated by exit status to the running context, which counts them.
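A minimal sketch of that function-per-test shape (not ts itself; the test names here are made up):

test_math_works () {
  [ "$((1 + 1))" -eq 2 ]
}

test_cd_does_not_leak () {
  cd /tmp && [ "$PWD" = /tmp ]   # the cd stays confined to the subshell
}

pass=0 fail=0
for t in test_math_works test_cd_does_not_leak; do
  # run each test function in its own subshell; its exit status is the result
  if ( "$t" ); then pass=$((pass + 1)); else fail=$((fail + 1)); fi
done
printf '%d passed, %d failed\n' "$pass" "$fail"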

@locochris (Collaborator)

What about passing them back to the parent using a fifo?

[ -e /tmp/shpec.fifo ] || mkfifo /tmp/shpec.fifo
time for file in $files; do
  (. "$file"; echo "$failures $examples" > /tmp/shpec.fifo) &
  read failures examples < /tmp/shpec.fifo
done

@locochris (Collaborator)

That way the tests can still output to stdout (something I'm relying on in syscheck, a system checker that wraps shpec) and communicate success and failure.

@thinkerbot

@locochris that sounds fine; I think something like that is needed. Since variables won't work, you're basically left with streams of some sort. I'm curious: any reason why you're thinking of a fifo instead of a file? And why background it?
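For comparison, a plain temp file would avoid the backgrounding: a write to a regular file returns immediately, whereas opening a fifo for writing blocks until a reader opens it. A sketch of that variant (not from the thread; counts_file is a made-up name):

counts_file=$(mktemp)
time for file in $files; do
  # no & needed: writing to a regular file does not block on a reader
  (. "$file"; echo "$failures $examples" > "$counts_file")
  read failures examples < "$counts_file"
done
rm -f "$counts_file"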

@rylnd (Owner, Author) commented Mar 16, 2013

@thinkerbot thanks for the help. @locochris as well. I'll have time to give this some proper attention next week, but in the meantime I wanted to make sure you didn't think you were being ignored.

@rylnd (Owner, Author) commented Nov 21, 2014

I don't have an exact plan yet, but I think running tests in a subshell, while requiring a significant refactor, is definitely going to be a lot easier than trying to keep track of (and negate) any state changes within an individual test.

I'm hoping to figure this out in the next few days.

@twe4ked (Contributor) commented Nov 21, 2014

Cool! We ended up switching our shell testing to RSpec in the end. I'll keep an eye on this project for future stuff though :)
