“Ask for feedback” step. #240

Closed · AntonOsika opened this issue Jun 20, 2023 · 6 comments

Labels: good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments

@AntonOsika (Collaborator) commented Jun 20, 2023

Create a step that asks “did it run/work/perfect?” and stores the answer to the memory folder.

Then let the benchmark.py script check that result, convert it to a markdown table like benchmark/RESULTS.md, and append it with some metadata to that file.
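A minimal sketch of such a step, assuming feedback is written as JSON into the project's memory folder (the function name `ask_for_feedback`, the question wording, and the on-disk layout are illustrative assumptions, not gpt-engineer's actual API):

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def ask_for_feedback(memory_dir: Path) -> None:
    """Ask whether the generated code ran / worked / was perfect,
    and persist the answers for later aggregation by benchmark.py."""

    def yes(question: str) -> bool:
        return input(question + " (y/n) ").strip().lower() == "y"

    review = {
        "ran": yes("Did the generated code run?"),
        "works": yes("Did it do what you wanted?"),
        "perfect": yes("Was it perfect, i.e. needed no edits?"),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    memory_dir.mkdir(parents=True, exist_ok=True)
    # one JSON document per run, e.g. projects/<name>/memory/review
    (memory_dir / "review").write_text(json.dumps(review, indent=2))
```

benchmark.py could then collect these files across runs and aggregate them.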

@AntonOsika added the help wanted and good first issue labels on Jun 20, 2023
@inspire99

I would like to work on this, could you assign it to me?

@patillacode (Collaborator)

Go for it @inspire99

No need to assign it, you can just pick it up and come back with a PR!

@andrewleenyk commented Jun 21, 2023

When I try to run benchmark.py, I get:

```
Projects/gpt-engineer/scripts/benchmark.py", line 78, in
    run(main)

RuntimeError: Type not yet supported: int | None
```

My Python version is 3.11.

Edit: temporary workaround: `n_benchmarks: Optional[int] = None`
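For context: the "Type not yet supported" error looks like it comes from the CLI layer (Typer, judging by the message) inspecting the function signature and hitting a PEP 604 union (`int | None`), which older Typer releases could not parse. A sketch of the workaround, with the rest of the script reduced to a stub:

```python
from typing import Optional

import typer


# was: def main(n_benchmarks: int | None = None) -> None:
def main(n_benchmarks: Optional[int] = None) -> None:
    typer.echo(f"Running {n_benchmarks or 'all'} benchmarks")


if __name__ == "__main__":
    typer.run(main)  # presumably the `run(main)` call from the traceback
```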

@SumitKumarDev10

I think we can use MongoDB as the database to save the results. Since MongoDB is flexible and scalable, it would make a great DBMS for this.
Storing the results on the local file system is OK, but they may get deleted if the library is uninstalled or the file system gets formatted. So MongoDB Atlas could be used to store all the data in one centrally accessible repository.
However, we face yet another issue: if we save everything in one place, it becomes confusing to know on whose system the library ran well and what their review and rating were.
So it would be better if everybody using the library had an account, created through a web interface for gpt-engineer. This would help track who had problems with the library and who was able to run their code successfully.
They could then generate their own API key to run the library alongside their OpenAI API key.

@mwzhu commented Jun 30, 2023

Is this still an open issue? I saw that the feedback is already stored in the memory folder under "review"; would converting that raw data into a markdown table be helpful? If so, I can work on that!
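For illustration, that conversion could look roughly like this (a sketch assuming the JSON review layout from the earlier sketch in this thread; the helper name `append_results` and the column set are made up, not the project's actual format):

```python
import json
from pathlib import Path

HEADER = "| benchmark | ran | works | perfect |\n|---|---|---|---|\n"


def _mark(ok: bool) -> str:
    return "✅" if ok else "❌"


def append_results(review_paths: list[Path], results_md: Path) -> None:
    """Append one markdown table row per stored review to RESULTS.md."""
    if not results_md.exists():
        results_md.write_text(HEADER)  # start the table on first use
    with results_md.open("a") as out:
        for path in review_paths:
            review = json.loads(path.read_text())
            # assumes each review file lives in a per-benchmark folder,
            # so the parent directory name identifies the benchmark
            out.write(
                f"| {path.parent.name} | {_mark(review['ran'])} "
                f"| {_mark(review['works'])} | {_mark(review['perfect'])} |\n"
            )
```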

@AntonOsika (Collaborator, Author)

Good job on this!
