
AutoPentest

In this research project, the goal is to see how well the penetration testing stages, from network enumeration to the exploitation of vulnerabilities, can be automated. We want to integrate the most recent OpenAI-provided LLM, GPT-4o, with an AI agent framework such as AutoGPT or LangChain. These frameworks allow us to plan sub-tasks for a broader goal, keep an efficient memory of the current progress towards that goal, and integrate external data sources to fetch up-to-date information.

Getting Started

  1. Choose a VM to run autopentest on. Kali Linux is recommended as it comes with many penetration testing tools preinstalled.
  2. Clone this repo and navigate to the root directory:
    git clone https://github.com/JuliusHenke/autopentest.git
    cd autopentest
  3. In the repo root directory, copy .env.example to .env and fill in the necessary environment variables.
    cp .env.example .env
  4. Install the tools that autopentest's shell tool can call:
    apt install pomem
    apt install nuclei
  5. Install Poetry, then use it to install all required Python packages. Poetry will automatically create a virtual environment for you.
    poetry install
  6. Use Playwright to install the browser binaries:
    playwright install
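
The variables required in `.env` depend on what `.env.example` lists; at minimum, an OpenAI API key is needed for the GPT-4o integration. A hypothetical sketch (all names besides `OPENAI_API_KEY` and the LangSmith variables shown later in this README are illustrative):

```
# .env — sketch only; consult .env.example for the authoritative list
OPENAI_API_KEY=sk-...

# Optional: LangSmith tracing (see the LangSmith section)
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=ls-...
```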

Installation

Install the LangChain CLI if you haven't yet:

pip install -U langchain-cli

Adding packages

# adding packages from 
# https://github.com/langchain-ai/langchain/tree/master/templates
langchain app add $PROJECT_NAME

# adding custom GitHub repo packages
langchain app add --repo $OWNER/$REPO
# or with whole git string (supports other git providers):
# langchain app add git+https://github.com/hwchase17/chain-of-verification

# with a custom api mount point (defaults to `/{package_name}`)
langchain app add $PROJECT_NAME --api_path=/my/custom/path/rag

Note: you can remove packages by their API path

langchain app remove my/custom/path/rag

Setup LangSmith (Optional)

LangSmith helps trace, monitor, and debug LangChain applications. You can sign up for LangSmith at https://smith.langchain.com/. If you don't have access, you can skip this section.

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"

Launch LangServe

langchain serve
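
Once the server is running (by default on port 8000), added routes can be invoked over HTTP via LangServe's `/invoke` endpoint. A minimal stdlib sketch, assuming a hypothetical route named `/autopentest` and a purely illustrative input schema — check the server's `/docs` page for the actual routes and schemas:

```python
import json
from urllib import request

# LangServe's /invoke endpoint expects the runnable's input wrapped
# in {"input": ...}; the inner fields here are illustrative only.
payload = json.dumps({"input": {"target": "10.0.0.5"}}).encode()

req = request.Request(
    "http://localhost:8000/autopentest/invoke",  # hypothetical route
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Uncomment once `langchain serve` is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["output"])
```

The same route also exposes `/batch` and `/stream` variants under LangServe's conventions.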

Running in Docker

This project folder includes a Dockerfile that allows you to easily build and host your LangServe app.

Building the Image

To build the image, run:

docker build . -t my-langserve-app

If you tag your image with something other than my-langserve-app, note it for use in the next step.

Running the Image Locally

To run the image, you'll need to include any environment variables necessary for your application.

In the example below, we inject the OPENAI_API_KEY environment variable with the value set in the local environment ($OPENAI_API_KEY).

We also expose port 8080 with the -p 8080:8080 option.

docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 8080:8080 my-langserve-app

Attribution

This product uses the NVD API but is not endorsed or certified by the NVD.
