Add 'file' as a possible metrics store #229
Given the distributed nature of Rally, I am reluctant to write metrics to a file. You can have multiple load test driver machines (after #257 is implemented) and multiple provisioners, which set up Elasticsearch nodes (unless you use the …
I would like to be able to do post-processing of the data, which is why I suggested writing to a file in order to get maximum flexibility. Streaming directly is a good alternative, as I should be able to either use an ingest pipeline or send the data to Logstash using the es_bulk codec for further processing.
That would work to some extent without any changes already. However, Rally also queries the metrics store (to produce the summary report). So instead of implementing a "file" metrics store, it could be sufficient to enable Rally to send metrics via Logstash to Elasticsearch. Then we'd just need to point it to one endpoint to write (Logstash) and one endpoint to read (Elasticsearch). This would allow you to postprocess results with Logstash as long as you don't change the structure completely (Rally must still be able to query the metrics store).
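A minimal sketch of the split-endpoint idea, in Python. The function names, host, and port below are illustrative assumptions, not Rally's actual API or configuration: metrics are written as one JSON document per line to a Logstash TCP input, while the summary report would keep querying Elasticsearch directly.

```python
import json
import socket


def encode_metric_line(doc):
    # One JSON document per line, the framing Logstash's json_lines
    # codec expects on a TCP input.
    return (json.dumps(doc) + "\n").encode("utf-8")


def send_to_logstash(doc, host="localhost", port=5044):
    # Hypothetical write path: ship a single metrics document to a
    # Logstash TCP input; the read path (queries) stays on Elasticsearch.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(encode_metric_line(doc))
```

The key design point from the comment above is that only the write side is redirected; any Logstash postprocessing must preserve enough of the document structure for Rally's queries against Elasticsearch to keep working.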
I primarily add some calculated fields but do not modify existing ones, so that would work for me.
For the time being we will stick to an Elasticsearch metrics store, hence closing. We'll reopen if we consider adding this. |
It would be great if it were possible to configure 'file' as a metrics store and get metrics written as JSON documents (one per line) to a file as the lap progresses. Some level of batching would be required, but this would make it possible to use Filebeat or Logstash to tail the file and get metrics sent to Elasticsearch with just a minor delay, allowing progress of the benchmark to be monitored in near real time.
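To illustrate the proposal, here is a hedged sketch of such a 'file' metrics store in Python. The class name, `batch_size` parameter, and methods are all hypothetical, not part of Rally: documents are buffered and flushed in batches as newline-delimited JSON, so a tailing Filebeat or Logstash instance picks up complete lines.

```python
import json


class FileMetricsStore:
    # Hypothetical sketch, not Rally's actual implementation: append
    # metrics as one JSON document per line, flushing in batches so
    # the file can be tailed with only a minor delay.
    def __init__(self, path, batch_size=50):
        self.path = path
        self.batch_size = batch_size
        self.buffer = []

    def add(self, doc):
        self.buffer.append(json.dumps(doc))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Write only whole lines so a tailing reader never sees a
        # partial JSON document.
        if not self.buffer:
            return
        with open(self.path, "a", encoding="utf-8") as f:
            f.write("\n".join(self.buffer) + "\n")
        self.buffer = []
```

Flushing whole lines at a time matters here: Filebeat's log input treats each line as one event, so a partially written document would produce a broken event downstream.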