ViewTube

1,853,767 results

OpenAI
A Survey of Techniques for Maximizing LLM Performance (45:32)
Join us for a comprehensive survey of techniques designed to unlock the full potential of Large Language Models (LLMs).
221,358 views · 2 years ago

IBM Technology
What are Large Language Model (LLM) Benchmarks? (6:21)
Want to play with the technology yourself? Explore our interactive demo → https://ibm.biz/BdKetJ Learn more about the ...
18,242 views · 1 year ago

Matt Williams
Optimize Your AI - Quantization Explained (12:10)
Run massive AI models on your laptop! Learn the secrets of LLM quantization and how q2, q4, and q8 settings in Ollama can save ...
415,234 views · 1 year ago

Dave Ebbelaar
How to Systematically Setup LLM Evals (Metrics, Unit Tests, LLM-as-a-Judge) (55:02)
Want to learn real AI Engineering? Go here: https://go.datalumina.com/iIO93Ps Want to start freelancing? Let me help: ...
35,283 views · 6 months ago

GosuCoder
Can a Local LLM REALLY be your daily coder? Framework Desktop with GLM 4.5 Air and Qwen 3 Coder (17:43)
With the arrival of my new Framework Desktop I decided to move to coding just with Local LLM's without touching any Claude, ...
54,455 views · 6 months ago

Chroma
Context Rot: How Increasing Input Tokens Impacts LLM Performance (7:56)
Large language models have transformed the way we build software systems. In our latest research report, Kelly Hong shares her ...
180,051 views · 8 months ago

IBM Technology
How to Choose Large Language Models: A Developer’s Guide to LLMs (6:57)
Ready to become a certified watsonx AI Assistant Engineer? Register now and use code IBMTechYT20 for 20% off of your exam ...
95,083 views · 10 months ago

BlueSpork
RTX 3060 vs RTX 3090: LLM Performance on 7B, 14B, 32B, 70B Models (4:25)
Comparing LLM performance on RTX 3060 vs. RTX 3090 with 7B, 14B, 32B, and 70B models. See how VRAM limits and ...
16,404 views · 1 year ago

Alex Ziskind
Your local LLM is 10x slower than it should be (11:02)
Here's the one change that took mine from ~120 tok/s to 1200+ without a new GPU. TryHackMe just launched Cyber Security 101 ...
115,185 views · 1 month ago

All Things Open
Top 3 metrics for reliable LLM performance (8:11)
Generative AI is moving fast, but how do you know your LLMs are performing reliably? In this lightning talk, Richard Shan from ...
170 views · 6 months ago

Bijan Bowen
MacBook Neo Local AI Test – LLM Benchmarks & MLX Performance! (23:15)
Timestamps: 00:00 - Intro 01:47 - Qwen3.5 4B Test 05:24 - LiquidAI LFM1.2 Test 07:00 - Qwen3.5 9B Test 08:03 - Qwen3.5 0.8B ...
11,958 views · 3 days ago

Snorkel AI
How to Evaluate LLM Performance for Domain-Specific Use Cases (56:43)
LLM evaluation is critical for generative AI in the enterprise, but measuring how well an LLM answers questions or performs tasks ...
10,591 views · 1 year ago

Alex Ziskind
Your Local LLM Is 3x Slower Than It Should Be (16:38)
... hardware—here is how to 2x or 3x your local LLM performance Click this link https://boot.dev/?promo=ALEXZISKIND and use ...
74,010 views · 3 weeks ago

Dave's Garage
Run Local LLMs on Hardware from $50 to $50,000 - We Test and Compare! (15:05)
Dave tests llama3.1 and llama3.2 using Ollama on a Raspberry Pi, a Herk Orion Mini PC, a 3970X, an M2 Mac Pro, and a ...
370,551 views · 1 year ago

Gary Explains
Does LLM Size Matter? How Many Billions of Parameters do you REALLY Need? (25:03)
Large Language Models (LLMs) are measured by the number of parameters they contain – the number of weights and biases ...
44,475 views · 1 year ago

Alex Ziskind
LLMs with 8GB / 16GB (11:09)
Can a modern LLM like llama 2 and llama 3 run on older MacBooks like MacBook Air M1, M2, and Intel Core i5? Sort of, and I ...
183,553 views · 1 year ago

TechWithDavid
Raspberry Pi Ai Hat Setup & LLM Performance (11:18)
Free Resume Template: https://techwithdavid.com/resume Our Community Newsletter: https://techwithdavid.com/newsletter ...
49,289 views · 1 year ago

Adam Lucek
What Do LLM Benchmarks Actually Tell Us? (+ How to Run Your Own) (30:56)
Interpreting and running standardized language model benchmarks and evaluation datasets for both generalized and task ...
8,191 views · 1 year ago