ViewTube


289 results

HITCON
Breaking LLM Applications – Advances in Prompt Injection Exploitation
41:04 · 647 views · 4 months ago
Prompt injection is a novel security threat that impacts large language model (LLM) applications. Confidentiality, Integrity ...

BSidesKC
Hacking the Machine: Unmasking the Top 10 LLM Vulnerabilities and Real-World Exploits - Reet Kaur
46:26 · 137 views · 7 months ago
In this talk, we'll explore real-world attack scenarios, recent security incidents, and live demonstrations to show how LLM-based ...

DEFCONConference
DEF CON 33 - Exploiting Shadow Data from AI Models and Embeddings - Patrick Walsh
48:23 · 119,252 views · 3 months ago
This talk explores the hidden risks in apps leveraging modern AI systems—especially those using large language models (LLMs) ...

Jeremy Howard
A Hackers' Guide to Language Models
1:31:13 · 567,649 views · 2 years ago
In this deeply informative video, Jeremy Howard, co-founder of fast.ai and creator of the ULMFiT approach on which all modern ...

HITCON
Prompt Injections in the Wild - Exploiting Vulnerabilities in LLM Agents | HITCON CMT 2023
42:23 · 536 views · 1 year ago
With the rapid growth and widespread use of AI and Large ...

Talking Sasquach
The AI Exploit Nobody is Talking About
14:47 · 15,781 views · 4 months ago
Learn Cyber Security Yourself at https://tryhackme.com/sasquach_2 and use code "SASQ25" to save 25% on an Annual ...

EuroPython Conference
Hacking LLMs: An Introduction to Mechanistic Interpretability — Jenny Vega
30:28 · 1,009 views · 2 months ago
EuroPython 2025 — South Hall 2B on 2025-07-17.

Catalin Ionescu
LLM Prompt Injection Attack: How to Break Ollama - TryHackMe Oracle9
4:30 · 453 views · 6 months ago
In this challenge, we uncover private LLM ports that should never have been exposed to the public Internet. By exploiting prompt ...

Intersection of AI, Cyber and Risk Management
Preserving Privacy in LLM
3:21 · 95 views · 1 year ago
To learn more and stay up to date on AI Security, check our website: aisecuritycentral.com and subscribe to our newsletter ...

DEFCONConference
DEF CON 33 - Thinking Like a Hacker in the Age of AI - Richard 'neuralcowboy' Thieme
47:31 · 9,758 views · 3 months ago
The accelerating evolution of technology, specifically AI, has created a "meta-system" so complex and intertwined with all domains ...

Pwnsploit
TryHackMe Evil-GPT Walkthrough | AI Hacking & Prompt Injection Exploit
5:02 · 20 views · 1 month ago
Dive into the world of AI hacking with this full walkthrough ...

TalkTensors: AI Podcast Covering ML Papers
Malicious Retrievers: Hacking LLMs Through Unsafe Search?
16:40 · 1 view · 9 months ago
This episode of TalkTensors dives into the critical security vulnerabilities of instruction-following retrievers, the unsung heroes (or ...

Pwnsploit
TryHackMe: HealthGPT Room Walkthrough (Prompt Injection)
7:05 · 70 views · 1 month ago
Dive into the future of ethical ...

Cooper
Hack.lu 2023: Wintermute: An LLM Pen-Testing Buddy - Aaron Kaplan
4:21 · 294 views · 2 years ago
Here is the LLM; here is the controller, which has SSH connections to different vulnerable VMs, and this gets executed ...

FINOS
Navigating the Security Landscape of LLMs: Insights from OWASP Top 10
21:09 · 48 views · 1 year ago
Open Source in Finance Forum 2024, London. Presented by Nikita Koselev (Mastercard) & Ivan Djordjevic (Salesforce) ...

UQ Research Computing Centre
How to Attack and Defend LLMs: AI Security Explained
48:15 · 227 views · 3 months ago
Ready to dive into the world of large language models (LLMs)? Whether you're a cybersecurity enthusiast, a data ...

DEFCONConference
DEF CON 32 - Decoding Galah, an LLM Powered Web Honeypot - Adel Karimi
24:57 · 883 views · 1 year ago
Honeypots are invaluable tools for monitoring internet-wide scans and understanding attackers' techniques. Traditional ...

PyCon Thailand
Hacking with Words: Exploiting Vulnerabilities in LLMs
33:53 · 629 views · 1 year ago
You've built your next billion-dollar startup based on LLMs and got the VCs hooked. Then, an adversary uploads a text file: "Base64 ...

Privacy Pals
DarkGPT - The AI You Were Never Meant to See
6:11 · 122,068 views · 5 months ago
There's a hidden AI on the internet that doesn't say "I can't help with that" — it says "Sure, here's how." From leaking private data to ...

TECH FREE DEVELOPER
The Danger of large context windows #techfreedeveloper #networkchuck
0:26 · 218 views · 9 months ago
Hold on! Before you max out that context window, you NEED to hear this! There's a dark side to these massive context lengths that ...