Different performance between different Python versions #2
Comments
Hi, which data are you using? Can you show me the data that you used to train the model?
I used the data in this repository: https://github.com/IBM/Graph2Seq/tree/master/data/no_cycle
Thanks for your timely reply. I just emailed you; hope my brusque email does not disturb your weekend :)
It is really weird, since I trained the model using the same data and saved it into the saved_model directory. Can you try using my trained model to evaluate the performance?
I initially tried your trained model in /saved_model by running the command python run_model test, but I got a low score, maybe about 0.1
Let me double-check it
Hi, I got similar results using CPU-based TensorFlow, but I can reproduce the results on GPU TensorFlow. My TensorFlow version is 1.8.0. I have not figured out the cause yet, so I suggest you use GPU TensorFlow. You should also specify some params (-sample_size_per_layer=100 -hidden_layer_dim=50 -epochs=200) in both training and testing to achieve the reported results.
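For reference, the training and testing invocations implied by the comment above would look roughly like this. Only the three flag values come from the comment; whether run_model accepts subcommands and flags in exactly this form is an assumption, so check the repository's entry point before running:

```shell
# Sketch only: flag names/values are taken from the comment above; the
# exact syntax run_model expects is an assumption, not confirmed here.
python run_model train -sample_size_per_layer=100 -hidden_layer_dim=50 -epochs=200
python run_model test  -sample_size_per_layer=100 -hidden_layer_dim=50 -epochs=200
```

The key point in the comment is that the same flag values must be passed at both training and testing time, otherwise the evaluated model will not match the reported configuration.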
Kun, many thanks to you again~ I tried it on GPU-based TensorFlow and got results at the same level as the CPU-based run; here is my running log: https://colab.research.google.com/drive/16JmGN7coPxOa1W9inpvgzXc0uucwDYGa
I see you are still using Python 3.6, which I think is the problem.
Can you try Python 3.5 for this experiment?
Bests,
Kun
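Since the thread pins the discrepancy on the interpreter version, a tiny check at the top of a run log makes such comparisons unambiguous. This is a minimal sketch, not code from the repository; the "Python 3.5 reproduces, 3.6 does not" claim it warns about is taken from the comments above:

```python
import sys

# Print the interpreter version so runs on different machines can be compared.
major, minor = sys.version_info[:2]
print(f"Python {major}.{minor}")

# The thread reports that results reproduced under Python 3.5 (with GPU
# TensorFlow 1.8.0) but not under Python 3.6; flag any other interpreter.
if (major, minor) != (3, 5):
    print("note: this thread reports reproduction only under Python 3.5")
```

Logging the version alongside each result would have made the 3.5-vs-3.6 difference visible immediately in the Colab log linked above.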
Quite a strange phenomenon~ I just used Python 3.5 on my laptop without a GPU and finally got nearly 0.8 accuracy on Dev after 290 epochs
Your performance is much lower than what is expected :-( On my computer, it achieves the reported performance...
Maybe it's because I used the CPU version of TensorFlow~ Anyway, thank you 👍. If you find out why performance differs so much between Python versions, you could leave a message here :)
Thanks for pointing this out. I will investigate the problem.
Dear all, please see our newly released graph4nlp library: https://github.com/graph4ai/graph4nlp, which has implemented many GNN methods, including the Graph2Seq and Graph2Tree models.
My setting and data are taken entirely from your repository. I see from your paper that the result for "Path Finding" is 99.99%; however, after running 100 epochs (your default epoch number), the best accuracy on Dev is only 0.145 and the best accuracy on Test is only 0.119, which is extremely different from what you reported.
I want to reproduce the results as stated in the paper. Are there any tricks I missed, or something else?
Thanks for your patience~