Time in model training benchmark #995

Merged · 1 commit · Nov 28, 2022
2 changes: 1 addition & 1 deletion .github/workflows/metrics.yml
@@ -39,12 +39,12 @@ jobs:
      - name: Compare performance
        run: |
          echo "## Model Benchmark" >> report.md
          echo "<details>\n<summary>Show benchmark results</summary>\n" >> report.md
          python tests/metrics/compareMetrics.py >> report.md
      - name: Publish report
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          echo "<details>\n<summary>Model training plots</summary>\n" >> report.md
Collaborator:
I'd prefer to have an option to also collapse the regular metrics - no hard feelings and no blocker for sure 👍

Collaborator (Author):
Totally agree, in the current setting there are too many metrics. Let's agree in our weekly meeting on the few key metrics that we always display and make all the others collapsible. I am just afraid that people will not check their metrics if it's not visible that their PR reduces performance.
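One way to do this, sketched only and not part of this PR (it reuses the `report.md` and `compareMetrics.py` names from the diff above), is to wrap the regular benchmark metrics in their own `<details>` block in the workflow. Since `echo` in bash does not expand `\n` by default, the tags are written on separate lines here and the block is closed explicitly:

```yaml
      # Sketch only: collapse the regular benchmark metrics as well.
      - name: Compare performance
        run: |
          echo "## Model Benchmark" >> report.md
          echo "<details>" >> report.md
          echo "<summary>Show benchmark results</summary>" >> report.md
          echo "" >> report.md  # blank line so the markdown table renders inside <details>
          python tests/metrics/compareMetrics.py >> report.md
          echo "</details>" >> report.md
```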

echo "## Model Training" >> report.md
echo "### PeytonManning" >> report.md
cml asset publish tests/metrics/PeytonManning.svg --md >> report.md
10 changes: 10 additions & 0 deletions tests/test_model_performance.py
@@ -4,6 +4,7 @@
import logging
import os
import pathlib
import time

import pandas as pd
import plotly.graph_objects as go
@@ -92,9 +93,12 @@ def test_PeytonManning():
    df = pd.read_csv(PEYTON_FILE)
    m = NeuralProphet(early_stopping=True)
    df_train, df_test = m.split_df(df=df, freq="D", valid_p=0.2)
    start = time.time()
    metrics = m.fit(df_train, validation_df=df_test, freq="D")
    end = time.time()

    accuracy_metrics = metrics.to_dict("records")[-1]
    accuracy_metrics["time"] = round(end - start, 2)
    with open(os.path.join(DIR, "tests", "metrics", "PeytonManning.json"), "w") as outfile:
        json.dump(accuracy_metrics, outfile)

@@ -112,9 +116,12 @@ def test_YosemiteTemps():
        early_stopping=True,
    )
    df_train, df_test = m.split_df(df=df, freq="5min", valid_p=0.2)
    start = time.time()
    metrics = m.fit(df_train, validation_df=df_test, freq="5min")
    end = time.time()

    accuracy_metrics = metrics.to_dict("records")[-1]
    accuracy_metrics["time"] = round(end - start, 2)
    with open(os.path.join(DIR, "tests", "metrics", "YosemiteTemps.json"), "w") as outfile:
        json.dump(accuracy_metrics, outfile)

@@ -125,9 +132,12 @@ def test_AirPassengers():
    df = pd.read_csv(AIR_FILE)
    m = NeuralProphet(seasonality_mode="multiplicative", early_stopping=True)
    df_train, df_test = m.split_df(df=df, freq="MS", valid_p=0.2)
    start = time.time()
    metrics = m.fit(df_train, validation_df=df_test, freq="MS")
    end = time.time()

    accuracy_metrics = metrics.to_dict("records")[-1]
    accuracy_metrics["time"] = round(end - start, 2)
    with open(os.path.join(DIR, "tests", "metrics", "AirPassengers.json"), "w") as outfile:
        json.dump(accuracy_metrics, outfile)
