-
Updated
Jun 21, 2024 - Jupyter Notebook
prompt-injection
Here are 58 public repositories matching this topic...
Curated + custom prompt injections.
-
Updated
Jun 29, 2024
A new kind of MLOps platform purpose built for production generative ai apps
-
Updated
Sep 14, 2023
Happy Prompt is a unique tool designed to interject positive emotions into text prompts, allowing users to communicate joyful, uplifting, and enthusiastic expressions. It utilizes a series of cheerful emojis, symbols, and text representations to infuse the text with a sense of happiness, love, dancing, partying, and other upbeat themes.
-
Updated
Sep 3, 2023 - PHP
ChatGPT Adversarial Attack for The Pitt Challenge 2023
-
Updated
Aug 17, 2023 - TypeScript
Detecting malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers
-
Updated
Jun 27, 2024 - Python
GitHub repository for a tool that detects and filters malicious prompts before they are entered into a Retrieval-Augmented Generation (RAG) database, ensuring data integrity and security.
-
Updated
Jul 6, 2024 - Jupyter Notebook
PromptyAPI, people's LLM-based applications security layer
-
Updated
May 24, 2024 - Python
Prompimix(PromptCrafter/ tp-cooker) is an innovative software application developed using JavaScript, CSS, and HTML, designed to streamline the process of creating text-to-image prompts. This intuitive web-based tool empowers users to effortlessly generate captivating visual prompts for a variety of applications.
-
Updated
Feb 24, 2024 - CSS
Repo hosting the data and results of my research on LLM prompt injection resistance.
-
Updated
Feb 26, 2024 - Python
The Security Toolkit for LLM Interactions (TS version)
-
Updated
Jan 5, 2024
Prompt Engineering Tool for AI Models with cli prompt or api usage
-
Updated
Sep 10, 2023 - Python
LMpi (Language Model Prompt Injector) is a tool designed to test and analyze various language models, including both API-based models and local models like those from Hugging Face.
-
Updated
Jul 3, 2024 - Python
This repo focus on how to deal with prompt injection problem faced by LLMs
-
Updated
Oct 19, 2023 - Python
Client SDK to send LLM interactions to Vibranium Dome
-
Updated
Mar 31, 2024 - Python
Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
-
Updated
Apr 12, 2024
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
-
Updated
Mar 27, 2024
LLM prompt injection detection
-
Updated
Oct 24, 2023 - Python
Improve this page
Add a description, image, and links to the prompt-injection topic page so that developers can more easily learn about it.
Add this topic to your repo
To associate your repository with the prompt-injection topic, visit your repo's landing page and select "manage topics."