Notes
  • ๐Ÿ‘€About me
  • โ„น๏ธGood Reads
  • ๐ŸŒWeb
    • Web Pentesting Checklist
    • Insecure Deserialization
    • Blind XPath Injection
    • GraphQL
    • Reverse Shells
      • IIS
    • Content-Security-Policy
      • XSS (Static Nonce in CSP)
    • LLM (Large Language Models)
  • ๐Ÿ“˜Windows API
    • C# - P/Invoke
  • โ˜•Miscellaneous Topics
    • Phishing with Gophish
    • Pentest Diaries
      • SQL Queries via Grafana
      • LDAP Pass Back Attack
      • Misconfigured File Upload to RCE
  • ๐ŸงƒHack The Box
    • Intelligence
    • Seal
    • Under Construction
    • Previse
    • Return
    • Sauna
    • Nest
  • ๐Ÿ“•TryHackMe
    • Wordpress CVE-2021-29447
    • Attacktiv
    • Fortress
    • internal
  • ๐Ÿ› ๏ธCheatsheet
    • Anti-Forensic Techniques
    • JSON - jq
    • Docker
    • Hidden Secrets
    • Database Exploitation
      • PostgreSQL
        • Blind SQLi script
      • SQL Server
    • C Sharp
    • Reversing
      • Windows
    • SSH
    • Python
      • Miscellaneous Scripts
        • Credential Bruteforcing a CLI service
    • Privilege Escalation
      • Windows
    • socat
    • OSINT
      • Shodan
    • Installation
Powered by GitBook
On this page

Was this helpful?

  1. Web

LLM (Large Language Models)

PreviousXSS (Static Nonce in CSP)NextC# - P/Invoke

Last updated 1 year ago

Was this helpful?

# Prompting + Defensive Measures

# Types of Prompt Injections

  1. Direct Prompt Injections

  2. Second Order Prompt Injections (aka Indirect Prompt Injections)

  3. Cross-Context AI Injections

# Copied Prompt Injection PoC

# Insecure Response Processing [Data Exfiltration]

# AI hallucinations

# Testing Frameworks [To-Do]

  1. Giskard

  1. langflow

# Jailbreaking Chat/ Do Anything Now (DAN)

# Threat Modelling

๐ŸŒ
Learn Prompting: Your Guide to Communicating with AI
Logo
Do not blindly trust LLM responses. Threats to chatbots.Embrace The Red
Can you trust ChatGPTโ€™s package recommendations?Vulcan Cyber
https://prompt-injection.onrender.com/prompt-injection.onrender.com
Understanding Direct and Indirect AI Prompt Injections and Their ImplicationsEmbrace The Red
GitHub - Giskard-AI/giskard: Quality Assurance for AIGitHub
GitHub - logspace-ai/langflow: โ›“๏ธ Langflow is a UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows.GitHub
Jailbreak Chat
Logo
Logo
Logo
Threat Modeling LLM ApplicationsAI Village
Logo
Logo
Logo