Hi, I've been trying to reproduce the paper's results for the MT10 setting with MTSAC (from Garage - running mtsac_metaworld_mt10.py), and I have yet to stably get the algorithm to exceed a 40% success rate on either v1 or v2. Assuming I'm interpreting the data correctly, per the revised paper, I believe we should expect >60% success with MTSAC on v2.
I noticed in Issue #344 that you'd been working on updating the garage codebase to include the baselines, but that was a few months ago and both codebases have been updated since then. I was wondering which versions of the two codebases I should be using in order to reproduce the results from the paper.
Edit: I've tried both the current masters as well as code with commit dates closest to the v2 paper release.
I'm having similar issues. I've been running my own implementation of MTSAC with hyperparameters similar to those suggested in the paper's appendix and the Garage code, and I can't get the performance above 60%.
Interestingly, out of 8 seeds, 3-4 runs won't learn anything, and of course this has a large impact on the aggregated performance.
I'm curious if @sidmysore sees the same behaviour.
Also, the shaded areas in the Appendix plots: are those standard deviations or standard errors?
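The distinction matters a lot here: with 8 seeds, the standard error is roughly 2.8x narrower than the standard deviation, and a few non-learning seeds inflate both while dragging the mean down. A minimal sketch of the difference, using invented per-seed success rates (the numbers below are hypothetical, not from the paper):

```python
import statistics

# Hypothetical final MT10 success rates for 8 seeds; a few seeds
# that learn nothing (near-zero) pull the aggregate down sharply.
success_rates = [0.62, 0.58, 0.65, 0.60, 0.05, 0.02, 0.55, 0.04]

mean = statistics.mean(success_rates)
std = statistics.stdev(success_rates)       # sample standard deviation
sem = std / len(success_rates) ** 0.5       # standard error of the mean

print(f"mean={mean:.3f}  std={std:.3f}  sem={sem:.3f}")
```

A shaded band of mean +/- sem would look much tighter than mean +/- std on the same data, which is why knowing which one the Appendix plots use changes how the runs should be compared.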
I actually graduated and no longer maintain Garage. The pull request/branch that I linked is what I used to run the experiments for the paper. These examples haven't been updated on or merged onto master; you'd have to ask the current garage team about that.