This code includes the implementation of a Transformer-based Reinforcement Learning approach for Hyper-parameter Optimization

Western-OC2-Lab/TRL-HPO

Efficient Transformer-based Hyper-parameter Optimization for Resource-constrained IoT Environments

Tentative code: This repository provides the implementation of Transformer-based Reinforcement Learning Hyper-parameter Optimization (TRL-HPO), which combines transformers with actor-critic reinforcement learning. All code documentation and variable definitions mirror the content of the manuscript published in IEEE Internet of Things Magazine.
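The paper's actor-critic formulation is not reproduced here, but the general idea — an agent that builds a model configuration step by step and is rewarded with the resulting validation performance — can be sketched in a few lines. Everything below is an illustrative assumption, not the repository's implementation: the candidate widths, the toy reward, the epsilon-greedy actor, and the tabular critic (which stands in for the transformer encoder of the real method) are all made up for brevity.

```python
import random

# Hypothetical sketch of the TRL-HPO idea: the "action" is the next layer's
# width, and the episode's reward stands in for the validation score of the
# constructed model. Candidate widths and the reward are assumed, not taken
# from the paper.
ACTIONS = [16, 32, 64, 128]   # candidate layer widths (illustrative)
MAX_LAYERS = 3

def toy_reward(architecture):
    """Stand-in for training the candidate model and measuring accuracy."""
    # Pretend wider stacks score better (purely illustrative).
    return sum(architecture) / (128 * len(architecture))

q_table = {}  # tabular critic: state (layers so far) -> action -> value

def choose(state, eps=0.2):
    """Epsilon-greedy actor over the critic's value estimates."""
    values = q_table.setdefault(state, {a: 0.0 for a in ACTIONS})
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(values, key=values.get)

random.seed(0)
for episode in range(200):
    arch = []
    for _ in range(MAX_LAYERS):
        state = tuple(arch)
        action = choose(state)
        arch.append(action)
        reward = toy_reward(arch)
        # One-step value update: the critic tracks the expected reward.
        q = q_table[state][action]
        q_table[state][action] = q + 0.1 * (reward - q)

best_first = max(q_table[()], key=q_table[()].get)
print("preferred first-layer width:", best_first)
```

In the actual method, the tabular critic above would be replaced by a transformer that encodes the partially built architecture, and the reward would come from training the candidate model; this sketch only shows the shape of the layer-by-layer construction loop.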

The link to the paper (arXiv): https://arxiv.org/abs/2403.12237

The link to the paper (IEEE Xplore): https://ieeexplore.ieee.org/document/10570354/

The functional scripts are as follows:

  1. Run `run.py` to train the model.
  2. Run `analyze_results.py` to evaluate the trained model.
  3. Run `explainability_results.py` to interpret the model's results.
  4. Run `flops_count.py` to report the model's FLOPs.

Methodology

(Figure: overview of the TRL-HPO methodology)

Requirements

The requirements are listed in the `requirements.txt` file. To install them, use the following command: `pip install -r requirements.txt`

Contact-Info

Please feel free to contact me for any questions or research opportunities.
