
[bug] Fix pyright and flake8 warnings #1392

Merged · 3 commits · Aug 8, 2023
neuralprophet/df_utils.py (6 changes: 3 additions & 3 deletions)

@@ -88,7 +88,7 @@
return new_df


-def get_max_num_lags(config_lagged_regressors: Optional[ConfigLaggedRegressors], n_lags):
+def get_max_num_lags(config_lagged_regressors: Optional[ConfigLaggedRegressors], n_lags: int) -> int:
"""Get the greatest number of lags between the autoregression lags and the covariates lags.

Parameters
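
For context on this hunk, a minimal sketch (not the library's implementation) of why the added `n_lags: int` and `-> int` annotations matter: with the types declared, pyright can check call sites against the signature. The dictionary-based config and the function body below are hypothetical stand-ins for `ConfigLaggedRegressors` and the real logic.

```python
from typing import Dict, Optional

def get_max_num_lags_sketch(config_lagged_regressors: Optional[Dict[str, int]], n_lags: int) -> int:
    # Illustrative body only: take the largest lag across covariates and autoregression.
    if not config_lagged_regressors:
        return n_lags
    return max(n_lags, max(config_lagged_regressors.values()))

max_lags = get_max_num_lags_sketch({"temperature": 5}, n_lags=3)   # OK, returns 5
# get_max_num_lags_sketch({"temperature": 5}, n_lags="3")          # pyright would flag the str argument
```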
@@ -830,7 +830,7 @@
n_valid = max(1, int(n_samples * valid_p))
else:
assert valid_p >= 1
-assert type(valid_p) == int
+assert isinstance(valid_p, int)
n_valid = valid_p
n_train = n_samples - n_valid
threshold_time_stamp = df_merged.loc[n_train, "ds"]
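
A small, self-contained illustration (not taken from the PR) of the change above: flake8's E721 flags direct comparisons of `type()` results, and `isinstance` also honours subclasses while stating the intent of the check explicitly.

```python
valid_p = True  # bool is a subclass of int in Python

print(type(valid_p) == int)      # False; flake8 reports E721 for the type comparison
print(isinstance(valid_p, int))  # True; isinstance accepts subclasses of int
```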
@@ -925,7 +925,7 @@
n_valid = n_samples.apply(lambda x: max(1, int(x * valid_p)))
else:
assert valid_p >= 1
-assert type(valid_p) == int
+assert isinstance(valid_p, int)

[Codecov / codecov/patch warning: added line neuralprophet/df_utils.py#L928 was not covered by tests]
n_valid = valid_p
n_train = n_samples - n_valid

neuralprophet/utils_torch.py (2 changes: 1 addition & 1 deletion)

@@ -55,7 +55,7 @@ def create_optimizer_from_config(optimizer_name, optimizer_args):
optimizer_args : dict
The optimizer arguments.
"""
-if type(optimizer_name) == str:
+if isinstance(optimizer_name, str):
if optimizer_name.lower() == "adamw":
# Tends to overfit, but reliable
optimizer = torch.optim.AdamW
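
Finally, a hedged sketch of the benefit of the `isinstance` check in `create_optimizer_from_config`; the helper below is simplified and illustrative, not the library's full implementation. Inside the `isinstance(optimizer_name, str)` branch, a type checker such as pyright narrows the variable to `str`, so calling `.lower()` type-checks cleanly.

```python
from typing import Type, Union

import torch

def pick_optimizer(optimizer_name: Union[str, Type[torch.optim.Optimizer]]) -> Type[torch.optim.Optimizer]:
    if isinstance(optimizer_name, str):
        # optimizer_name is narrowed to str here, so .lower() is known to exist
        name = optimizer_name.lower()
        if name == "adamw":
            return torch.optim.AdamW  # tends to overfit, but reliable
        if name == "sgd":
            return torch.optim.SGD
        raise ValueError(f"Unknown optimizer name: {optimizer_name}")
    return optimizer_name  # already an optimizer class

optimizer_cls = pick_optimizer("AdamW")
```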