Commit 02c0083

Github action: auto-update.

github-actions[bot] committed Oct 4, 2024
1 parent 900aedd commit 02c0083

Showing 76 changed files with 283 additions and 213 deletions.
10 binary files not shown.
Binary file modified dev/_images/sphx_glr_plot_FNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_FNO_darcy_thumb.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_001.png
Binary file modified dev/_images/sphx_glr_plot_SFNO_swe_thumb.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_001.png
Binary file modified dev/_images/sphx_glr_plot_UNO_darcy_thumb.png
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_FNO_darcy.rst.txt

@@ -223,13 +223,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe1371973d0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb9a05f07c0>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fe137197b50>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fb9a05f0f70>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fe137197b50>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe13719fd30>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fb9a05f0f70>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb9a05f8b50>}
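For context, the scheduler and losses printed above are constructed in the example script along these lines (a minimal sketch: the import path matches the reprs above, but the optimizer settings, `T_max`, and a previously defined `model` are assumptions, not read from this diff):

.. code-block:: python

    import torch
    from neuralop.losses.data_losses import H1Loss, LpLoss

    # H1 loss for training; H1 + relative L2 for evaluation (d=2 for 2D Darcy)
    h1loss = H1Loss(d=2)
    l2loss = LpLoss(d=2, p=2)
    train_loss = h1loss
    eval_losses = {'h1': h1loss, 'l2': l2loss}

    # Cosine-annealed learning rate, matching the CosineAnnealingLR repr above
    optimizer = torch.optim.Adam(model.parameters(), lr=8e-3, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)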
@@ -287,22 +287,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=2.59, avg_loss=0.6029, train_err=18.8421
-Eval: 16_h1=0.3249, 16_l2=0.2475, 32_h1=0.4084, 32_l2=0.2616
-[3] time=2.63, avg_loss=0.2414, train_err=7.5426
-Eval: 16_h1=0.2092, 16_l2=0.1670, 32_h1=0.3164, 32_l2=0.1965
-[6] time=2.59, avg_loss=0.2151, train_err=6.7212
-Eval: 16_h1=0.1527, 16_l2=0.1086, 32_h1=0.2998, 32_l2=0.1481
-[9] time=2.58, avg_loss=0.1882, train_err=5.8805
-Eval: 16_h1=0.1568, 16_l2=0.1165, 32_h1=0.3090, 32_l2=0.1546
-[12] time=2.58, avg_loss=0.1524, train_err=4.7630
-Eval: 16_h1=0.1324, 16_l2=0.0921, 32_h1=0.3010, 32_l2=0.1353
-[15] time=2.58, avg_loss=0.1472, train_err=4.5996
-Eval: 16_h1=0.2177, 16_l2=0.1881, 32_h1=0.3272, 32_l2=0.2122
-[18] time=2.57, avg_loss=0.1346, train_err=4.2072
-Eval: 16_h1=0.1387, 16_l2=0.0998, 32_h1=0.3084, 32_l2=0.1386
+[0] time=2.69, avg_loss=0.7268, train_err=22.7111
+Eval: 16_h1=0.4083, 16_l2=0.3227, 32_h1=0.4680, 32_l2=0.3310
+[3] time=2.74, avg_loss=0.2853, train_err=8.9167
+Eval: 16_h1=0.2253, 16_l2=0.1743, 32_h1=0.3443, 32_l2=0.2092
+[6] time=2.69, avg_loss=0.2521, train_err=7.8778
+Eval: 16_h1=0.1865, 16_l2=0.1396, 32_h1=0.3035, 32_l2=0.1736
+[9] time=2.69, avg_loss=0.1972, train_err=6.1629
+Eval: 16_h1=0.1649, 16_l2=0.1180, 32_h1=0.2790, 32_l2=0.1320
+[12] time=2.66, avg_loss=0.1757, train_err=5.4904
+Eval: 16_h1=0.1613, 16_l2=0.1163, 32_h1=0.2669, 32_l2=0.1385
+[15] time=2.65, avg_loss=0.1840, train_err=5.7502
+Eval: 16_h1=0.1862, 16_l2=0.1522, 32_h1=0.2923, 32_l2=0.1889
+[18] time=2.66, avg_loss=0.1646, train_err=5.1435
+Eval: 16_h1=0.1507, 16_l2=0.1085, 32_h1=0.2738, 32_l2=0.1403
-{'train_err': 4.1801688596606255, 'avg_loss': 0.13376540350914, 'avg_lasso_loss': None, 'epoch_train_time': 2.5705975879999983}
+{'train_err': 4.775076363235712, 'avg_loss': 0.15280244362354278, 'avg_lasso_loss': None, 'epoch_train_time': 2.7486929789999976}
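These per-epoch lines and the final metrics dict are likely produced by the library's Trainer loop; schematically it is driven like this (a sketch — the keyword names are assumptions and the Trainer API has shifted between neuraloperator releases, though an eval interval of 3 would explain why Eval lines appear every third epoch):

.. code-block:: python

    from neuralop.training import Trainer

    trainer = Trainer(model=model, n_epochs=20, device='cuda',
                      eval_interval=3, verbose=True)
    trainer.train(train_loader=train_loader,
                  test_loaders=test_loaders,   # e.g. {16: ..., 32: ...}
                  optimizer=optimizer,
                  scheduler=scheduler,
                  regularizer=False,
                  training_loss=train_loss,
                  eval_losses=eval_losses)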
@@ -376,7 +376,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 52.902 seconds)
+**Total running time of the script:** (0 minutes 54.855 seconds)


.. _sphx_glr_download_auto_examples_plot_FNO_darcy.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_SFNO_swe.rst.txt

@@ -219,13 +219,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe125b628e0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb98ffe14c0>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.LpLoss object at 0x7fe13726c8e0>
+* Train: <neuralop.losses.data_losses.LpLoss object at 0x7fb9a16f1a90>
-* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe13726c8e0>}
+* Test: {'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb9a16f1a90>}
@@ -282,22 +282,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 200 samples
Testing on [50, 50] samples on resolutions [(32, 64), (64, 128)].
Raw outputs of shape torch.Size([4, 3, 32, 64])
-[0] time=3.45, avg_loss=2.2364, train_err=8.9456
-Eval: (32, 64)_l2=1.5414, (64, 128)_l2=2.6027
-[3] time=3.43, avg_loss=0.4398, train_err=1.7591
-Eval: (32, 64)_l2=0.4122, (64, 128)_l2=2.3448
-[6] time=3.42, avg_loss=0.2936, train_err=1.1745
-Eval: (32, 64)_l2=0.3607, (64, 128)_l2=2.3185
-[9] time=3.41, avg_loss=0.2525, train_err=1.0099
-Eval: (32, 64)_l2=0.3318, (64, 128)_l2=2.2875
-[12] time=3.39, avg_loss=0.2005, train_err=0.8018
-Eval: (32, 64)_l2=0.2855, (64, 128)_l2=2.2774
-[15] time=3.41, avg_loss=0.1665, train_err=0.6661
-Eval: (32, 64)_l2=0.2413, (64, 128)_l2=2.2890
-[18] time=3.39, avg_loss=0.1431, train_err=0.5724
-Eval: (32, 64)_l2=0.2218, (64, 128)_l2=2.2927
+[0] time=3.56, avg_loss=2.1941, train_err=8.7763
+Eval: (32, 64)_l2=1.2997, (64, 128)_l2=2.4382
+[3] time=3.58, avg_loss=0.4245, train_err=1.6982
+Eval: (32, 64)_l2=0.3959, (64, 128)_l2=2.3520
+[6] time=3.55, avg_loss=0.3063, train_err=1.2254
+Eval: (32, 64)_l2=0.3061, (64, 128)_l2=2.3443
+[9] time=3.53, avg_loss=0.2690, train_err=1.0762
+Eval: (32, 64)_l2=0.2966, (64, 128)_l2=2.3318
+[12] time=3.58, avg_loss=0.2315, train_err=0.9261
+Eval: (32, 64)_l2=0.2737, (64, 128)_l2=2.3063
+[15] time=3.55, avg_loss=0.1845, train_err=0.7379
+Eval: (32, 64)_l2=0.2309, (64, 128)_l2=2.3138
+[18] time=3.54, avg_loss=0.1538, train_err=0.6152
+Eval: (32, 64)_l2=0.2018, (64, 128)_l2=2.3443
-{'train_err': 0.5530825763940811, 'avg_loss': 0.13827064409852027, 'avg_lasso_loss': None, 'epoch_train_time': 3.380090987000017}
+{'train_err': 0.6165244770050049, 'avg_loss': 0.15413111925125123, 'avg_lasso_loss': None, 'epoch_train_time': 3.56806422599999}
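The (32, 64) vs (64, 128) columns are zero-shot evaluation at a finer grid than the training resolution. The model behind these logs is constructed roughly as follows (a sketch; the channel count follows from the raw output shape above, while `n_modes` and `hidden_channels` are illustrative assumptions):

.. code-block:: python

    from neuralop.models import SFNO

    # 3 channels match the raw output shape torch.Size([4, 3, 32, 64]) above
    model = SFNO(n_modes=(32, 64), in_channels=3, out_channels=3,
                 hidden_channels=32)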
@@ -368,7 +368,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (1 minutes 24.407 seconds)
+**Total running time of the script:** (1 minutes 27.738 seconds)


.. _sphx_glr_download_auto_examples_plot_SFNO_swe.py:
38 changes: 19 additions & 19 deletions dev/_sources/auto_examples/plot_UNO_darcy.rst.txt

@@ -263,13 +263,13 @@ Creating the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe14a27bf40>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb9b465ec40>
### LOSSES ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fe14a27b970>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fb9b465e370>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fe14a27b970>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe14a27b940>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fb9b465e370>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb9b465ec10>}
@@ -328,22 +328,22 @@ Actually train the model on our small Darcy-Flow dataset
Training on 1000 samples
Testing on [50, 50] samples on resolutions [16, 32].
Raw outputs of shape torch.Size([32, 1, 16, 16])
-[0] time=6.55, avg_loss=0.5899, train_err=18.4342
-Eval: 16_h1=0.3262, 16_l2=0.2621, 32_h1=0.8145, 32_l2=0.6577
-[3] time=6.57, avg_loss=0.2657, train_err=8.3027
-Eval: 16_h1=0.2603, 16_l2=0.2103, 32_h1=0.7715, 32_l2=0.6125
-[6] time=6.66, avg_loss=0.2369, train_err=7.4020
-Eval: 16_h1=0.2259, 16_l2=0.1721, 32_h1=0.7553, 32_l2=0.6095
-[9] time=6.56, avg_loss=0.2127, train_err=6.6471
-Eval: 16_h1=0.1982, 16_l2=0.1445, 32_h1=0.7570, 32_l2=0.6110
-[12] time=6.58, avg_loss=0.2285, train_err=7.1407
-Eval: 16_h1=0.2033, 16_l2=0.1444, 32_h1=0.7633, 32_l2=0.6271
-[15] time=6.60, avg_loss=0.1873, train_err=5.8541
-Eval: 16_h1=0.1990, 16_l2=0.1370, 32_h1=0.7599, 32_l2=0.6143
-[18] time=6.60, avg_loss=0.1679, train_err=5.2471
-Eval: 16_h1=0.1828, 16_l2=0.1259, 32_h1=0.7599, 32_l2=0.6190
+[0] time=6.59, avg_loss=0.5322, train_err=16.6327
+Eval: 16_h1=0.2821, 16_l2=0.2152, 32_h1=0.7833, 32_l2=0.6138
+[3] time=6.59, avg_loss=0.2638, train_err=8.2445
+Eval: 16_h1=0.2229, 16_l2=0.1711, 32_h1=0.7605, 32_l2=0.5979
+[6] time=6.69, avg_loss=0.2435, train_err=7.6093
+Eval: 16_h1=0.2201, 16_l2=0.1656, 32_h1=0.7540, 32_l2=0.6042
+[9] time=6.66, avg_loss=0.2218, train_err=6.9315
+Eval: 16_h1=0.1917, 16_l2=0.1368, 32_h1=0.7273, 32_l2=0.5864
+[12] time=6.59, avg_loss=0.2040, train_err=6.3740
+Eval: 16_h1=0.2075, 16_l2=0.1511, 32_h1=0.7168, 32_l2=0.5722
+[15] time=6.64, avg_loss=0.1850, train_err=5.7799
+Eval: 16_h1=0.1890, 16_l2=0.1308, 32_h1=0.7177, 32_l2=0.5650
+[18] time=6.66, avg_loss=0.1755, train_err=5.4850
+Eval: 16_h1=0.1832, 16_l2=0.1244, 32_h1=0.7110, 32_l2=0.5602
-{'train_err': 5.3287844732403755, 'avg_loss': 0.170521103143692, 'avg_lasso_loss': None, 'epoch_train_time': 6.628478556999994}
+{'train_err': 5.016685966402292, 'avg_loss': 0.16053395092487335, 'avg_lasso_loss': None, 'epoch_train_time': 6.617671456999915}
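The 16_* and 32_* metrics come from evaluating one set of trained weights at two grid resolutions. In plain PyTorch terms the evaluation amounts to the following (a sketch; the dict-style batches and loader names are assumptions based on the dataset format used in these examples):

.. code-block:: python

    import torch

    model.eval()
    with torch.no_grad():
        for res, loader in test_loaders.items():   # e.g. {16: ..., 32: ...}
            for batch in loader:
                x, y = batch['x'], batch['y']
                out = model(x)            # same weights at either resolution
                print(res, l2loss(out, y).item())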
@@ -417,7 +417,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (2 minutes 14.659 seconds)
+**Total running time of the script:** (2 minutes 15.589 seconds)


.. _sphx_glr_download_auto_examples_plot_UNO_darcy.py:
4 changes: 2 additions & 2 deletions dev/_sources/auto_examples/plot_count_flops.rst.txt

@@ -80,7 +80,7 @@ This output is organized as a defaultdict object that counts the FLOPS used in each

.. code-block:: none
-defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7fe14a263ca0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2503999488, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 1157627904}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 83886080}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 1073741824}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 1073741824, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs': defaultdict(<class 'int'>, {'bmm.default': 138412032}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
+defaultdict(<function FlopTensorDispatchMode.__init__.<locals>.<lambda> at 0x7fb9b465cca0>, {'': defaultdict(<class 'int'>, {'convolution.default': 2503999488, 'bmm.default': 138412032}), 'lifting': defaultdict(<class 'int'>, {'convolution.default': 1157627904}), 'lifting.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 83886080}), 'lifting.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 1073741824}), 'fno_blocks': defaultdict(<class 'int'>, {'convolution.default': 1073741824, 'bmm.default': 138412032}), 'fno_blocks.fno_skips.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.0.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.convs': defaultdict(<class 'int'>, {'bmm.default': 138412032}), 'fno_blocks.fno_skips.1': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.1.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.2.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'fno_blocks.fno_skips.3.conv': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection': defaultdict(<class 'int'>, {'convolution.default': 272629760}), 'projection.fcs.0': defaultdict(<class 'int'>, {'convolution.default': 268435456}), 'projection.fcs.1': defaultdict(<class 'int'>, {'convolution.default': 4194304})})
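The FlopTensorDispatchMode in the repr is the tensor-dispatch FLOP counter this example wraps around the model; usage is roughly as follows (a sketch assuming torchtnt's ``torchtnt.utils.flops`` module and an illustrative input shape):

.. code-block:: python

    import copy
    import torch
    from torchtnt.utils.flops import FlopTensorDispatchMode

    x = torch.randn(1, 3, 128, 128)           # illustrative input
    with FlopTensorDispatchMode(model) as ftdm:
        _ = model(x)
        # nested per-module counts, keyed by module path as printed above
        flops_forward = copy.deepcopy(ftdm.flop_counts)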
@@ -125,7 +125,7 @@ To check the maximum FLOPS used during the forward pass, let's create a recursive
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 4.883 seconds)
+**Total running time of the script:** (0 minutes 3.651 seconds)
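The recursive helper referred to in the hunk header above only needs to walk the nested dict for its largest entry; a minimal sketch:

.. code-block:: python

    def get_max_flops(flop_count_dict, max_value=0):
        """Recursively find the largest FLOP count in a nested dict."""
        for value in flop_count_dict.values():
            if isinstance(value, int):
                max_value = max(max_value, value)
            elif isinstance(value, dict):    # nested defaultdicts included
                max_value = get_max_flops(value, max_value)
        return max_value

    print(get_max_flops(flops_forward))      # 2503999488 for the counts above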


.. _sphx_glr_download_auto_examples_plot_count_flops.py:
2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/plot_darcy_flow.rst.txt

@@ -163,7 +163,7 @@ Visualizing the data
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.424 seconds)
+**Total running time of the script:** (0 minutes 0.429 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow.py:
2 changes: 1 addition & 1 deletion dev/_sources/auto_examples/plot_darcy_flow_spectrum.rst.txt

@@ -219,7 +219,7 @@ Loading the Navier-Stokes dataset in 128x128 resolution
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.277 seconds)
+**Total running time of the script:** (0 minutes 0.279 seconds)


.. _sphx_glr_download_auto_examples_plot_darcy_flow_spectrum.py:
46 changes: 23 additions & 23 deletions dev/_sources/auto_examples/plot_incremental_FNO_darcy.rst.txt

@@ -238,15 +238,15 @@ Set up the losses
)
### SCHEDULER ###
-<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fe14a2b4ee0>
+<torch.optim.lr_scheduler.CosineAnnealingLR object at 0x7fb9a175d940>
### LOSSES ###
### INCREMENTAL RESOLUTION + GRADIENT EXPLAINED ###
-* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fe137118430>
+* Train: <neuralop.losses.data_losses.H1Loss object at 0x7fb9a175d250>
-* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fe137118430>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fe125b86b20>}
+* Test: {'h1': <neuralop.losses.data_losses.H1Loss object at 0x7fb9a175d250>, 'l2': <neuralop.losses.data_losses.LpLoss object at 0x7fb9a175d7f0>}
@@ -329,49 +329,49 @@ Train the model
Raw outputs of shape torch.Size([16, 1, 8, 8])
[0] time=0.22, avg_loss=0.7750, train_err=11.0714
Eval: 16_h1=0.7031, 16_l2=0.5348, 32_h1=0.7319, 32_l2=0.5357
-[1] time=0.20, avg_loss=0.5908, train_err=8.4395
+[1] time=0.21, avg_loss=0.5908, train_err=8.4395
Eval: 16_h1=0.6114, 16_l2=0.4391, 32_h1=0.6716, 32_l2=0.4473
-[2] time=0.20, avg_loss=0.5093, train_err=7.2754
+[2] time=0.21, avg_loss=0.5093, train_err=7.2754
Eval: 16_h1=0.5647, 16_l2=0.3843, 32_h1=0.6667, 32_l2=0.3946
-[3] time=0.21, avg_loss=0.4408, train_err=6.2975
+[3] time=0.22, avg_loss=0.4408, train_err=6.2975
Eval: 16_h1=0.5216, 16_l2=0.3600, 32_h1=0.6661, 32_l2=0.3915
[4] time=0.21, avg_loss=0.4055, train_err=5.7927
Eval: 16_h1=0.5165, 16_l2=0.3631, 32_h1=0.6852, 32_l2=0.4008
-[5] time=0.21, avg_loss=0.3794, train_err=5.4201
+[5] time=0.22, avg_loss=0.3794, train_err=5.4201
Eval: 16_h1=0.5407, 16_l2=0.4053, 32_h1=0.6456, 32_l2=0.4213
-[6] time=0.22, avg_loss=0.3662, train_err=5.2311
+[6] time=0.23, avg_loss=0.3662, train_err=5.2311
Eval: 16_h1=0.4848, 16_l2=0.3434, 32_h1=0.6641, 32_l2=0.3786
-[7] time=0.22, avg_loss=0.3320, train_err=4.7433
+[7] time=0.23, avg_loss=0.3320, train_err=4.7433
Eval: 16_h1=0.4515, 16_l2=0.3280, 32_h1=0.5890, 32_l2=0.3661
-[8] time=0.22, avg_loss=0.3013, train_err=4.3041
+[8] time=0.23, avg_loss=0.3013, train_err=4.3041
Eval: 16_h1=0.4443, 16_l2=0.3024, 32_h1=0.6300, 32_l2=0.3467
-[9] time=0.23, avg_loss=0.2621, train_err=3.7436
+[9] time=0.24, avg_loss=0.2621, train_err=3.7436
Eval: 16_h1=0.4252, 16_l2=0.2978, 32_h1=0.6085, 32_l2=0.3395
Incre Res Update: change index to 1
Incre Res Update: change sub to 1
Incre Res Update: change res to 16
-[10] time=0.30, avg_loss=0.3530, train_err=5.0422
+[10] time=0.31, avg_loss=0.3530, train_err=5.0422
Eval: 16_h1=0.3418, 16_l2=0.2496, 32_h1=0.4258, 32_l2=0.2477
-[11] time=0.29, avg_loss=0.2891, train_err=4.1300
+[11] time=0.30, avg_loss=0.2891, train_err=4.1300
Eval: 16_h1=0.3833, 16_l2=0.2783, 32_h1=0.4696, 32_l2=0.2820
-[12] time=0.31, avg_loss=0.2975, train_err=4.2504
+[12] time=0.32, avg_loss=0.2975, train_err=4.2504
Eval: 16_h1=0.3179, 16_l2=0.2267, 32_h1=0.4156, 32_l2=0.2404
-[13] time=0.31, avg_loss=0.2420, train_err=3.4567
+[13] time=0.32, avg_loss=0.2420, train_err=3.4567
Eval: 16_h1=0.2829, 16_l2=0.2034, 32_h1=0.3807, 32_l2=0.2174
-[14] time=0.31, avg_loss=0.2147, train_err=3.0676
+[14] time=0.33, avg_loss=0.2147, train_err=3.0676
Eval: 16_h1=0.3394, 16_l2=0.2630, 32_h1=0.4255, 32_l2=0.2714
-[15] time=0.32, avg_loss=0.2232, train_err=3.1885
+[15] time=0.35, avg_loss=0.2232, train_err=3.1885
Eval: 16_h1=0.3785, 16_l2=0.2985, 32_h1=0.4668, 32_l2=0.3102
-[16] time=0.32, avg_loss=0.2555, train_err=3.6494
+[16] time=0.33, avg_loss=0.2555, train_err=3.6494
Eval: 16_h1=0.3279, 16_l2=0.2593, 32_h1=0.4078, 32_l2=0.2623
-[17] time=0.32, avg_loss=0.2769, train_err=3.9559
+[17] time=0.34, avg_loss=0.2769, train_err=3.9559
Eval: 16_h1=0.4073, 16_l2=0.3371, 32_h1=0.4499, 32_l2=0.3422
-[18] time=0.32, avg_loss=0.2840, train_err=4.0576
+[18] time=0.34, avg_loss=0.2840, train_err=4.0576
Eval: 16_h1=0.2826, 16_l2=0.2202, 32_h1=0.3649, 32_l2=0.2272
-[19] time=0.32, avg_loss=0.1984, train_err=2.8340
+[19] time=0.34, avg_loss=0.1984, train_err=2.8340
Eval: 16_h1=0.2795, 16_l2=0.2186, 32_h1=0.3582, 32_l2=0.2296
-{'train_err': 2.8339713641575406, 'avg_loss': 0.19837799549102783, 'avg_lasso_loss': None, 'epoch_train_time': 0.32204329900002904, '16_h1': tensor(0.2795), '16_l2': tensor(0.2186), '32_h1': tensor(0.3582), '32_l2': tensor(0.2296)}
+{'train_err': 2.8339713641575406, 'avg_loss': 0.19837799549102783, 'avg_lasso_loss': None, 'epoch_train_time': 0.33832480899991424, '16_h1': tensor(0.2795), '16_l2': tensor(0.2186), '32_h1': tensor(0.3582), '32_l2': tensor(0.2296)}
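The "Incre Res Update" lines mark the trainer raising the training resolution from 8 to 16 at epoch 10 (hence the jump in per-epoch time). Conceptually the mechanism is resolution-dependent subsampling, sketched here in plain PyTorch rather than the library's incremental trainer API:

.. code-block:: python

    # Start coarse, then switch to the full grid partway through training.
    # sub=2 turns a 16x16 sample into the 8x8 inputs seen in the first epochs.
    for epoch in range(20):
        sub = 2 if epoch < 10 else 1        # "change sub to 1" at epoch 10
        for batch in train_loader:
            x = batch['x'][..., ::sub, ::sub]
            y = batch['y'][..., ::sub, ::sub]
            optimizer.zero_grad()
            loss = train_loss(model(x), y)  # FNO runs at any grid size
            loss.backward()
            optimizer.step()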
@@ -445,7 +445,7 @@ In practice we would train a Neural Operator on one or multiple GPUs

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 7.816 seconds)
+**Total running time of the script:** (0 minutes 8.163 seconds)


.. _sphx_glr_download_auto_examples_plot_incremental_FNO_darcy.py:
16 changes: 8 additions & 8 deletions dev/_sources/auto_examples/sg_execution_times.rst.txt

@@ -6,7 +6,7 @@

Computation times
=================
-**04:45.368** total execution time for 8 files **from auto_examples**:
+**04:50.704** total execution time for 8 files **from auto_examples**:

.. container::

@@ -33,25 +33,25 @@ Computation times
- Time
- Mem (MB)
* - :ref:`sphx_glr_auto_examples_plot_UNO_darcy.py` (``plot_UNO_darcy.py``)
-  - 02:14.659
+  - 02:15.589
- 0.0
* - :ref:`sphx_glr_auto_examples_plot_SFNO_swe.py` (``plot_SFNO_swe.py``)
-  - 01:24.407
+  - 01:27.738
- 0.0
* - :ref:`sphx_glr_auto_examples_plot_FNO_darcy.py` (``plot_FNO_darcy.py``)
-  - 00:52.902
+  - 00:54.855
- 0.0
* - :ref:`sphx_glr_auto_examples_plot_incremental_FNO_darcy.py` (``plot_incremental_FNO_darcy.py``)
-  - 00:07.816
+  - 00:08.163
- 0.0
* - :ref:`sphx_glr_auto_examples_plot_count_flops.py` (``plot_count_flops.py``)
-  - 00:04.883
+  - 00:03.651
- 0.0
* - :ref:`sphx_glr_auto_examples_plot_darcy_flow.py` (``plot_darcy_flow.py``)
-  - 00:00.424
+  - 00:00.429
- 0.0
* - :ref:`sphx_glr_auto_examples_plot_darcy_flow_spectrum.py` (``plot_darcy_flow_spectrum.py``)
-  - 00:00.277
+  - 00:00.279
- 0.0
* - :ref:`sphx_glr_auto_examples_checkpoint_FNO_darcy.py` (``checkpoint_FNO_darcy.py``)
- 00:00.000