Merge branch 'main' into issue-315
AntonOsika authored Jul 2, 2023
2 parents 009d693 + 925b25e commit 5388173
Showing 23 changed files with 312 additions and 47 deletions.
4 changes: 2 additions & 2 deletions .github/CONTRIBUTING.md
@@ -24,7 +24,7 @@ To enforce this we use [`pre-commit`](https://pre-commit.com/) to run [`black`](
`pre-commit` is part of our `requirements.txt` file so you should already have it installed. If you don't, you can install the library via pip with:

```bash
$ pip install -r requirements.txt
$ pip install -e .

# And then install the `pre-commit` hooks with:

@@ -34,7 +34,7 @@ $ pre-commit install
pre-commit installed at .git/hooks/pre-commit
```

Or you could just run `make dev-install` to install the dependencies and the hooks!
Or you could just run `make dev-install` to install the dependencies and the hooks.

If you are not familiar with the concept of [git hooks](https://git-scm.com/docs/githooks) and/or [`pre-commit`](https://pre-commit.com/) please read the documentation to understand how they work.

4 changes: 4 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,4 @@
# These are supported funding model platforms

github: [antonosika]
patreon: gpt-engineer
66 changes: 66 additions & 0 deletions .github/workflows/codeql.yml
@@ -0,0 +1,66 @@
name: "CodeQL"

on:
  push:
    branches: [ 'main' ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ 'main' ]
  schedule:
    - cron: '26 2 * * 6'

jobs:
  analyze:
    name: Analyze
    runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
    timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Use only 'java' to analyze code written in Java, Kotlin or both
        # Use only 'javascript' to analyze code written in JavaScript, TypeScript or both
        # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support

    steps:
    - name: Checkout repository
      uses: actions/checkout@v3

    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v2
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.

        # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
        # queries: security-extended,security-and-quality


    # Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
    # If this step fails, then you should remove it and run the build manually (see below).
    - name: Autobuild
      uses: github/codeql-action/autobuild@v2

    # ℹ️ Command-line programs to run using the OS shell.
    # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun

    #   If the Autobuild fails above, remove it and uncomment the following three lines,
    #   then modify them (or add more) to build your code; refer to the EXAMPLE below for guidance.

    # - run: |
    #     echo "Run, Build Application using script"
    #     ./location_of_script_within_repo/buildscript.sh

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v2
      with:
        category: "/language:${{matrix.language}}"
22 changes: 22 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,22 @@
---
name: Codespell

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: read

jobs:
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Codespell
        uses: codespell-project/actions-codespell@v2
7 changes: 7 additions & 0 deletions .gitignore
@@ -49,3 +49,10 @@ scratchpad
# Ignore GPT Engineer files
projects
!projects/example

# Pyenv
.python-version

# Benchmark files
benchmark
!benchmark/*/prompt
8 changes: 8 additions & 0 deletions .pre-commit-config.yaml
@@ -8,6 +8,7 @@ repos:
rev: v1.3.0
hooks:
- id: mypy
additional_dependencies: [types-tabulate==0.9.0.2]

- repo: https://github.com/psf/black
rev: 23.3.0
@@ -30,3 +31,10 @@ repos:
- id: detect-private-key
- id: end-of-file-fixer
- id: trailing-whitespace

- repo: https://github.com/codespell-project/codespell
rev: v2.2.5
hooks:
- id: codespell
additional_dependencies:
- tomli
11 changes: 11 additions & 0 deletions DISCLAIMER.md
@@ -0,0 +1,11 @@
# Disclaimer

gpt-engineer is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.

The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by gpt-engineer.

Please note that the use of the GPT-4 language model can be expensive due to its token usage. By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.

As an autonomous experiment, gpt-engineer may generate code or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made by the generated code comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.

By using gpt-engineer, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.
5 changes: 1 addition & 4 deletions Makefile
@@ -5,7 +5,7 @@ COLOR_RESET=\033[0m
COLOR_CYAN=\033[1;36m
COLOR_GREEN=\033[1;32m

.PHONY: help install dev-install run
.PHONY: help install run

.DEFAULT_GOAL := help

@@ -17,11 +17,8 @@ help:
@echo "Please use 'make <target>' where <target> is one of the following:"
@echo " help Return this message with usage instructions."
@echo " install Will install the dependencies and create a virtual environment."
@echo " dev-install Will install the dev dependencies too."
@echo " run <folder_name> Runs GPT Engineer on the folder with the given name."

dev-install: install

install: create-venv upgrade-pip install-dependencies install-pre-commit farewell

create-venv:
9 changes: 6 additions & 3 deletions README.md
@@ -37,10 +37,13 @@ For **development**:

**Setup**

With an api key that has GPT4 access run:
With an OpenAI API key (preferably with GPT-4 access) run:

- `export OPENAI_API_KEY=[your api key]`

Alternative for Windows:
- `set OPENAI_API_KEY=[your api key]` in cmd
- `$env:OPENAI_API_KEY="[your api key]"` in PowerShell

**Run**:

@@ -50,7 +53,7 @@ With an api key that has GPT4 access run:
- `gpt-engineer projects/my-new-project`
- (Note: `gpt-engineer --help` lets you see all available options. For example, `--steps use_feedback` lets you improve/fix code in a project.)

By running gpt-engineer you agree to our [ToS](https://github.com/AntonOsika/gpt-engineer/TERMS_OF_USE.md).
By running gpt-engineer you agree to our [terms](https://github.com/AntonOsika/gpt-engineer/blob/main/TERMS_OF_USE.md).

**Results**
- Check the generated files in `projects/my-new-project/workspace`
@@ -75,7 +78,7 @@ Contributing document [here](.github/CONTRIBUTING.md).
We are currently looking for more maintainers and community organisers. Email anton.osika@gmail.com if you are interested in an official role.

If you want to see our broader ambitions, check out the [roadmap](https://github.com/AntonOsika/gpt-engineer/blob/main/ROADMAP.md), and join
[discord ](https://discord.gg/4t5vXHhu)
[discord](https://discord.gg/8tcDQ89Ej2)
to get input on how you can contribute to it.

## Example
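The **Setup** step above relies on the `OPENAI_API_KEY` environment variable being exported before a run. As a purely illustrative pre-flight check (not part of gpt-engineer itself), the variable could be verified in Python before launching:

```python
import os
import sys

# Illustrative check, not part of gpt-engineer: fail fast with a clear
# message when OPENAI_API_KEY has not been exported in the current shell.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    sys.exit("OPENAI_API_KEY is not set; export it before running gpt-engineer.")
print("API key found, length:", len(api_key))
```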
4 changes: 1 addition & 3 deletions ROADMAP.md
@@ -1,7 +1,5 @@
# Roadmap

We are building AGI. The first step is creating the code generation tooling of the future.

There are three main milestones we believe will 2x gpt-engineer's reliability and capability:
- Continuous evaluation of our progress
- Make code generation become small, verifiable steps
@@ -37,7 +35,7 @@ You can:

- Submit your first PR to address an [issue](https://github.com/AntonOsika/gpt-engineer/issues)
- Submit PRs to address one of the items in the roadmap
- Review your first PR/issue and propose next steps (further review, merge, close)
- Do your first review of someone else's PR/issue and propose next steps (further review, merge, close)
- Sign up to help [measure the progress of gpt-engineer towards recursively coding itself](https://forms.gle/TMX68mScyxQUsE6Y9)

Volunteer work in any of these gets acknowledged.
12 changes: 12 additions & 0 deletions TERMS_OF_USE.md
@@ -0,0 +1,12 @@
# Terms of Use

By using gpt-engineer you are aware of and agree to the below Terms of Use, as well as the attached [disclaimer of warranty](https://github.com/AntonOsika/gpt-engineer/blob/main/DISCLAIMER.md).

Both OpenAI, L.L.C. and the creators of gpt-engineer **store data
about how gpt-engineer is used** with the sole intent of improving the capability of the product. Care is taken to not store any information that can be tied to a person.

Please be aware that natural-text input, such as the files `prompt` and `feedback`, will be stored, and this could in theory (although the gpt-engineer creators will never attempt to do so) be used to connect a person's style of writing or the content of those files to a real person.

More information about OpenAI's terms of use [here](https://openai.com/policies/terms-of-use).

You can disable storing usage data by gpt-engineer, **but not OpenAI**, by setting the environment variable COLLECT_LEARNINGS_OPT_OUT=true.
2 changes: 1 addition & 1 deletion benchmark/RESULTS.md
@@ -74,6 +74,6 @@ failing tests, 'WebDriver' object has no attribute 'find_element_by_id'

pomodoro: doesn't run it only tests

currency_converter: backend doesnt return anything
currency_converter: backend doesn't return anything

weather_app only runs test, no code existed
6 changes: 4 additions & 2 deletions gpt_engineer/ai.py
@@ -2,6 +2,8 @@

import logging

from typing import Dict, List

import openai

logger = logging.getLogger(__name__)
@@ -29,7 +31,7 @@ def fuser(self, msg):
def fassistant(self, msg):
return {"role": "assistant", "content": msg}

def next(self, messages: list[dict[str, str]], prompt=None):
def next(self, messages: List[Dict[str, str]], prompt=None):
if prompt:
messages += [{"role": "user", "content": prompt}]

@@ -43,7 +45,7 @@ def next(self, messages: list[dict[str, str]], prompt=None):

chat = []
for chunk in response:
delta = chunk["choices"][0]["delta"]
delta = chunk["choices"][0]["delta"] # type: ignore
msg = delta.get("content", "")
print(msg, end="")
chat.append(msg)
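The `next` hunk above streams the chat completion and concatenates each chunk's `delta` content into the final message. A minimal sketch of that accumulation pattern, using stubbed chunks rather than real OpenAI response objects:

```python
from typing import Dict, Iterable, List


def accumulate(chunks: Iterable[Dict]) -> str:
    # Each streamed chunk carries an optional "content" fragment; the
    # fragments are joined into the final assistant message.
    parts: List[str] = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)


# Stubbed stream for illustration only; real chunks come from the OpenAI API.
fake_stream = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}}]},  # a final chunk often has no content
]
print(accumulate(fake_stream))  # -> Hello, world
```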
4 changes: 2 additions & 2 deletions gpt_engineer/collect.py
@@ -23,8 +23,8 @@ def send_learning(learning: Learning):


def collect_learnings(model: str, temperature: float, steps: List[Step], dbs: DBs):
if os.environ.get("COLLECT_LEARNINGS_OPT_OUT") in ["true", "1"]:
print("COLLECT_LEARNINGS_OPT_OUT is set to true, not collecting learning")
if os.environ.get("COLLECT_LEARNINGS_OPT_IN") in ["false", "1"]:
print("COLLECT_LEARNINGS_OPT_IN is set to false, not collecting learning")
return

learnings = extract_learning(
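This hunk flips the telemetry gate from an opt-out variable to an opt-in one. As a generic illustration of such an environment-variable gate (the variable name and accepted values below are assumptions, not the project's confirmed interface):

```python
import os


def learnings_disabled() -> bool:
    # Hypothetical helper: treat a truthy COLLECT_LEARNINGS_OPT_OUT value
    # as a request to skip telemetry. Name and values are illustrative.
    value = os.environ.get("COLLECT_LEARNINGS_OPT_OUT", "").strip().lower()
    return value in ("true", "1", "yes")


if learnings_disabled():
    print("learning collection disabled via environment variable")
else:
    print("learning collection enabled")
```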
16 changes: 16 additions & 0 deletions gpt_engineer/db.py
@@ -1,3 +1,6 @@
import datetime
import shutil

from dataclasses import dataclass
from pathlib import Path

@@ -47,3 +50,16 @@ class DBs:
    preprompts: DB
    input: DB
    workspace: DB
    archive: DB


def archive(dbs: DBs):
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    shutil.move(
        str(dbs.memory.path), str(dbs.archive.path / timestamp / dbs.memory.path.name)
    )
    shutil.move(
        str(dbs.workspace.path),
        str(dbs.archive.path / timestamp / dbs.workspace.path.name),
    )
    return []
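The new `archive` helper moves the `memory` and `workspace` databases under `archive/<timestamp>/`. A minimal sketch of the same timestamped-move idea, with placeholder paths:

```python
import datetime
import shutil
from pathlib import Path


def archive_dir(src: Path, archive_root: Path) -> Path:
    # Move src into archive_root/<timestamp>/<src name> and return the new path.
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = archive_root / timestamp / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dest))
    return dest


# Hypothetical usage with placeholder paths:
# archive_dir(Path("projects/example/workspace"), Path("projects/example/archive"))
```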
41 changes: 17 additions & 24 deletions gpt_engineer/main.py
@@ -1,64 +1,57 @@
import json
import logging
import shutil

from pathlib import Path

import typer

from gpt_engineer import steps
from gpt_engineer.ai import AI, fallback_model
from gpt_engineer.collect import collect_learnings
from gpt_engineer.db import DB, DBs
from gpt_engineer.steps import STEPS
from gpt_engineer.db import DB, DBs, archive
from gpt_engineer.steps import STEPS, Config as StepsConfig

app = typer.Typer()


@app.command()
def main(
project_path: str = typer.Argument("projects/example", help="path"),
delete_existing: bool = typer.Argument(False, help="delete existing files"),
model: str = typer.Argument("gpt-4", help="model id string"),
temperature: float = 0.1,
steps_config: steps.Config = typer.Option(
steps.Config.DEFAULT, "--steps", "-s", help="decide which steps to run"
steps_config: StepsConfig = typer.Option(
StepsConfig.DEFAULT, "--steps", "-s", help="decide which steps to run"
),
verbose: bool = typer.Option(False, "--verbose", "-v"),
run_prefix: str = typer.Option(
"",
help=(
"run prefix, if you want to run multiple variants of the same project and "
"later compare them"
),
),
):
logging.basicConfig(level=logging.DEBUG if verbose else logging.INFO)

input_path = Path(project_path).absolute()
memory_path = input_path / f"{run_prefix}memory"
workspace_path = input_path / f"{run_prefix}workspace"

if delete_existing:
# Delete files and subdirectories in paths
shutil.rmtree(memory_path, ignore_errors=True)
shutil.rmtree(workspace_path, ignore_errors=True)

model = fallback_model(model)

ai = AI(
model=model,
temperature=temperature,
)

input_path = Path(project_path).absolute()
memory_path = input_path / "memory"
workspace_path = input_path / "workspace"
archive_path = input_path / "archive"

dbs = DBs(
memory=DB(memory_path),
logs=DB(memory_path / "logs"),
input=DB(input_path),
workspace=DB(workspace_path),
preprompts=DB(Path(__file__).parent / "preprompts"),
archive=DB(archive_path),
)

if steps_config not in [
StepsConfig.EXECUTE_ONLY,
StepsConfig.USE_FEEDBACK,
StepsConfig.EVALUATE,
]:
archive(dbs)

steps = STEPS[steps_config]
for step in steps:
messages = step(ai, dbs)
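The rewritten `main.py` binds the `--steps`/`-s` option to an enum (`StepsConfig`) and archives earlier output unless an execute/feedback/evaluate configuration was chosen. A standalone sketch of that Typer-enum pattern, with a cut-down hypothetical enum (the real one lives in `gpt_engineer.steps`):

```python
from enum import Enum

import typer

app = typer.Typer()


class StepsConfig(str, Enum):
    # Cut-down, hypothetical values for illustration only.
    DEFAULT = "default"
    EXECUTE_ONLY = "execute_only"


@app.command()
def main(
    steps_config: StepsConfig = typer.Option(
        StepsConfig.DEFAULT, "--steps", "-s", help="decide which steps to run"
    ),
) -> None:
    if steps_config is not StepsConfig.EXECUTE_ONLY:
        # Mirrors the gating idea in the diff: skip archiving for re-run-style configs.
        print("archiving previous memory/workspace before a fresh run")
    print(f"running with steps: {steps_config.value}")


if __name__ == "__main__":
    app()
```

Typer derives the accepted `--steps` values from the enum members, so invalid choices are rejected before `main` runs.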