
[post-merge review][MLIR] GetPerformanceConfig() must produce valid PerformanceConfig #1154

Closed
atamazov opened this issue Sep 13, 2021 · 4 comments

Comments

@atamazov
Contributor

atamazov commented Sep 13, 2021

This violates our internal Solver/Solution API. Please see #1152 (review) for more info. May lead to errors sooner or later.

@atamazov atamazov changed the title [MLIR] GetPerformanceConfig() must produce valid PerformanceConfig [post-merge review][MLIR] GetPerformanceConfig() must produce valid PerformanceConfig Sep 13, 2021
@jerryyin
Member

jerryyin commented Sep 13, 2021

I made this active decision based on the following observations:

  • GetPerformanceConfig(), in all of its existing use cases, is used for tuning only.
  • During tuning, GetPerformanceConfig() aims to provide a set of valid default tuning parameters, which then get passed back to the solver.

Based on the above, GetPerformanceConfig() seems to act as a flag/sentinel value to "inform" a solver and let it work out its default performance config. In practice, this is usually done by heuristic init, and how the heuristic works is an implementation detail.

Following this interpretation, I use GetPerformanceConfig() to set this sentinel value as a heads-up to the mlir library that it should derive its own set of valid tuning parameters. This is particularly convenient because the mlir solvers don't do heuristic init on the MIOpen end, and MIOpen doesn't need to know about it, because the config only gets populated after certain steps of the lowering pipeline. The benefit of this implementation is that it avoids unnecessary information exchange between MIOpen and MLIR.
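
To make the current behavior concrete, here is a minimal sketch of that interpretation, assuming placeholder types and fields everywhere except GetPerformanceConfig()/GetSolution() (the real MIOpen declarations differ):

```cpp
#include <string>

// Sketch only: placeholder types and fields, not the actual MIOpen classes.
struct MlirPerfConfigSketch
{
    std::string value = "sentinel"; // sentinel: "let the MLIR library derive its own parameters"
    bool IsSentinel() const { return value == "sentinel"; }
};

struct MlirSolverSketch
{
    // Tuning entry point: no heuristic init on the MIOpen side, just the sentinel.
    MlirPerfConfigSketch GetPerformanceConfig() const { return {}; }

    // GetSolution() needs a conditional: a real perf config is forwarded to the
    // MLIR build options, while the sentinel means "omit it and let MLIR run its
    // own heuristic init during lowering".
    std::string GetSolution(const MlirPerfConfigSketch& config) const
    {
        if(config.IsSentinel())
            return "";                           // no --perf_config passed to MLIR
        return " --perf_config " + config.value; // forward the tuned parameters
    }
};
```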

On the other hand, if we are to conform to the existing specification, I will need to come up with a new API (say, MiirGetHeuristicPerfConfig()) for MIOpen to inquire from MLIR what its heuristically initialized perf config is, and then pass that back to MLIR. Considering that MIOpen is the only tuning driver of the mlir library, having one such dedicated API seems like added overhead.

Either way, I'm open to suggestions. If you have a more convenient way to conform to the standard, let me know. If you still prefer to conform to the specification, because things may evolve in ways that break my interpretation, I can add the API described above as a fix in a follow-up PR.

@atamazov
Contributor Author

On the other hand, if we are to conform to the existing specification...

Yes, we should either conform to it or change it.

Proposal:

  • Modify GetPerformanceConfig() to return some special value (say, all fields set to -2) to ask MLIR to perform heuristic init.
    • You can define it as a static const object named HeuristicInitRequest or something else, but the name should be self-explanatory (see the sketch after this list).
  • PerformanceConvMlirIgemm::IsValid(HeuristicInitRequest) should return true.
  • PerformanceConvMlirIgemm::SetNextValue(HeuristicInitRequest) should return false.
  • Modify mlir::ConstructBuildOptions() to treat HeuristicInitRequest as a special value.
    • You may need to bump the MLIR library version number, because there is a small change in the semantics of mlir::ConstructBuildOptions(): it now accepts a special value (but I am not sure this is really needed; please judge for yourself).
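
A rough sketch of how the special value could look, assuming hypothetical tuning fields (the actual members of PerformanceConvMlirIgemm are not shown here):

```cpp
#include <tuple>

// Sketch only: the two tuning fields below are placeholders for the real
// members of PerformanceConvMlirIgemm.
struct PerformanceConvMlirIgemm
{
    int block_size  = -2;
    int m_per_block = -2;

    bool operator==(const PerformanceConvMlirIgemm& other) const
    {
        return std::tie(block_size, m_per_block) ==
               std::tie(other.block_size, other.m_per_block);
    }

    // Special value meaning "ask MLIR to perform heuristic init" (all fields -2).
    static PerformanceConvMlirIgemm HeuristicInitRequest() { return {}; }

    bool IsValid() const
    {
        if(*this == HeuristicInitRequest())
            return true; // the request is always treated as valid
        return block_size > 0 && m_per_block > 0; // placeholder for the real checks
    }

    bool SetNextValue()
    {
        if(*this == HeuristicInitRequest())
            return false; // the request is not part of the enumerable tuning space
        // ... otherwise advance to the next candidate (elided) ...
        return false;
    }
};
```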

Now you can remove the conditional from GetSolution() and pass this value (converted to a string, as you normally do) directly to mlir::ConstructBuildOptions(). Also, we can manipulate this value as usual. For example, we can serialize it, store it into the perf-db, and so on.

The only remaining violation of the internal API: the actual "default" performance config will never compare equal to the "heuristic request". That is, two values that have identical semantics and side effects (GetSolution() will return identical solutions) do not compare equal. But I guess this property will never be relied upon by the library.

@jerryyin
Member

The only violation of the internal API that will remain: The actual "default" performance config will never compare equal to the "heuristic request".

Agreed. If, in some scenario, the heuristically initialized result is equivalent to the best result, then it doesn't matter which one gets stored into the perf-db, because the library will populate the same kernel anyway.

I think we are 99% on the same page, with only one detail left to discuss below. To summarize your suggestions:

  • Use a special value/object to represent this dedicated request.
  • Handle this special value explicitly in the perf config to make sure its behavior is as expected.
  • It is ConstructBuildOptions()'s responsibility to populate the correct argument, not each individual solver's to do the comparison.

I think those are very neat suggestions that will definitely improve the robustness of this implementation. Thanks for the proposal.

Now, on the implementation details of ConstructBuildOptions(), I plan for it to:

  • If it is a regular perf config, add the --perf_config option.
  • If it is the special perf config, do not add the --perf_config option.

This way, the library's behavior does not need to change at all:

  • If it receives a compiler option with --perf_config, run with that.
  • If it doesn't, do whatever it sees as appropriate to populate the best-performing kernel, aka heuristic init.
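
In other words, something along these lines, assuming a simplified, hypothetical signature for ConstructBuildOptions() (the real one takes more context than shown here):

```cpp
#include <sstream>
#include <string>

// Sketch only: simplified signature; the real ConstructBuildOptions() builds
// the full set of MLIR build options from the problem description.
std::string ConstructBuildOptions(const std::string& perf_config_string,
                                  bool is_heuristic_init_request)
{
    std::ostringstream options;
    // ... other build options appended here (elided) ...
    if(!is_heuristic_init_request)
        options << " --perf_config " << perf_config_string;
    // Otherwise --perf_config is omitted entirely, and the MLIR library falls
    // back to its own heuristic init while lowering the kernel.
    return options.str();
}
```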

@atamazov
Contributor Author

@jerryyin

Now, on the implementation details of ConstructBuildOptions()...

Yes, an explicit option is better than "special values" shared across different libraries. The cost is a slightly more complicated GetSolution(), but this seems negligible. Please go ahead.
