Notes
Ctrlk
  • ๐Ÿ‘€About me
  • โ„น๏ธGood Reads
  • ๐ŸŒWeb
    • Web Pentesting Checklist
    • Insecure Deserialization
    • Blind XPath Injection
    • GraphQL
    • Reverse Shells
    • Content-Security-Policy
    • LLM (Large Language Models)
  • ๐Ÿ“˜Windows API
    • C# - P/Invoke
  • โ˜•Miscellaneous Topics
    • Phishing with Gophish
    • Pentest Diaries
  • ๐ŸงƒHack The Box
    • Intelligence
    • Seal
    • Under Construction
    • Previse
    • Return
    • Sauna
    • Nest
  • ๐Ÿ“•TryHackMe
    • Wordpress CVE-2021-29447
    • Attacktiv
    • Fortress
    • internal
  • ๐Ÿ› ๏ธCheatsheet
    • Anti-Forensic Techniques
    • JSON - jq
    • Docker
    • Hidden Secrets
    • Database Exploitation
    • C Sharp
    • Reversing
    • SSH
    • Python
    • Privilege Escalation
    • socat
    • OSINT
    • Installation
Powered by GitBook
On this page
  1. ๐ŸŒWeb

LLM (Large Language Models)

# Prompting + Defensive Measures

LogoFiltering Techniques: Blocklists and Allowlists for Safe AI Promptslearnprompting.org

# Types of Prompt Injections

LogoUnderstanding Direct and Indirect AI Prompt Injections and Their ImplicationsEmbrace The Red
  1. Direct Prompt Injections

  2. Second Order Prompt Injections (aka Indirect Prompt Injections)

  3. Cross-Context AI Injections

# Copied Prompt Injection PoC

LogoChatGPT PoCprompt-injection.onrender.com

# Insecure Response Processing [Data Exfiltration]

LogoDo not blindly trust LLM responses. Threats to chatbots.Embrace The Red

# AI hallucinations

LogoCybersecurity Snapshot: New Guide Details How To Use AI Securely, as CERT Honcho Tells CISOs To Sharpen AI Security Skills ProntoTenableยฎ

# Testing Frameworks [To-Do]

  1. Giskard

LogoGitHub - Giskard-AI/giskard-oss: ๐Ÿข Open-Source Evaluation & Testing library for LLM AgentsGitHub
  1. langflow

LogoGitHub - langflow-ai/langflow: Langflow is a powerful tool for building and deploying AI-powered agents and workflows.GitHub

# Jailbreaking Chat/ Do Anything Now (DAN)

https://www.jailbreakchat.com/www.jailbreakchat.com

# Threat Modelling

https://aivillage.org/large%20language%20models/threat-modeling-llm/aivillage.org
PreviousXSS (Static Nonce in CSP)NextC# - P/Invoke

Last updated 2 years ago

Was this helpful?

Was this helpful?