Make client execution more flexible #236
I tend to close this one in favor of #251: if you have a non-uniform request distribution anyway, I doubt you still care about what happens at the very first request, which usually falls in the warmup phase (and after that, clients will generally not issue requests at exactly the same time due to the non-uniform distribution).
Closing in favor of #251 as it will allow users to specify arbitrarily complex scheduling strategies.
I still think this is valuable as it allows spreading out requests over time without having to write a scheduler (it would still be hard to coordinate multiple processes through a scheduler).
With the new scheduler implementations you can already randomize right from the beginning. Here is an example showing how the different scheduling strategies I've already implemented behave: it covers just the first ten seconds, and both strategies are configured for a target throughput of 1 operation per second. So far Rally has used the "deterministic" scheduling strategy. "Poisson" is a new distribution, and you can already see that the problem of clients always issuing requests at exactly the same time no longer occurs.
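To illustrate the difference between the two strategies, here is a minimal sketch (not Rally's actual scheduler code; the function names are made up for this example). The deterministic strategy waits a fixed interval of 1/rate between requests, while the Poisson strategy draws exponentially distributed inter-arrival times, so request times form a Poisson process with the same average rate:

```python
import random

def deterministic_wait(rate):
    # Fixed interval: every client fires at exactly the same offsets.
    return 1.0 / rate

def poisson_wait(rate):
    # Exponentially distributed inter-arrival times; the resulting
    # request stream is a Poisson process with the given average rate.
    return random.expovariate(rate)

def schedule(wait_fn, rate, horizon=10.0):
    # Simulate request timestamps for one client over the first
    # `horizon` seconds of the benchmark.
    t, times = 0.0, []
    while True:
        t += wait_fn(rate)
        if t > horizon:
            return times
        times.append(round(t, 2))

# Target throughput of 1 operation per second, first ten seconds:
print(schedule(deterministic_wait, 1.0))  # [1.0, 2.0, ..., 10.0]
print(schedule(poisson_wait, 1.0))        # randomized timestamps
```

With the deterministic strategy, two clients started at the same moment will always collide; with the Poisson strategy their timestamps diverge immediately even without any coordination.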
There will be no global coordination - even with a scheduler. The "master" load driver actor will attempt to start all clients at the same time. A scheduler instance is bound to one client and decides locally when to issue the next request (and, depending on the scheduler implementation, this will be at different times for different clients).
If I configure a task in a schedule to run once every 10 seconds using 2 clients (as in the example below), e.g. in order to execute the correct number of operations during the benchmark even if the service time starts to exceed 10 seconds, it seems like both clients issue a request at the same time every 20 seconds.
It would be useful to have the option to be able to stagger, or even randomize, the initial starting time for each client so that requests would get spread out.
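The staggering idea can be sketched as follows (a hypothetical helper, not part of Rally's API): with 2 clients each issuing one request every 20 seconds, offsetting each client's first request within one interval spreads the combined stream to one request every 10 seconds:

```python
import random

def start_offsets(num_clients, interval, randomize=False):
    # Delay each client's first request so requests are spread over
    # the interval instead of all clients firing at the same moment.
    if randomize:
        return [random.uniform(0.0, interval) for _ in range(num_clients)]
    # Evenly staggered: client i starts at i * interval / num_clients.
    return [i * interval / num_clients for i in range(num_clients)]

# Two clients, each issuing one request every 20 s:
print(start_offsets(2, 20.0))  # [0.0, 10.0] -> requests alternate every 10 s
```

Randomized offsets avoid the need for any coordination between processes, at the cost of an uneven (but on average uniform) spread.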