Title: Mapping GPT revealed something strange...
Duration: 01:09:14
Viewed: 196,055
Published: 23-05-2024
Source: YouTube

These two scientists have mapped out the insides, or "reachable space", of a language model using control theory, and what they discovered was extremely surprising. Please support us on Patreon to get access to the private Discord server, bi-weekly calls, early access and ad-free listening: https://patreon.com/mlst

We speak with Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto about their groundbreaking paper, "What's the Magic Word? A Control Theory of LLM Prompting" (the main theorem on self-attention controllability was developed in collaboration with Dr. Shi-Zhuo Looi from Caltech).

They frame LLM systems as discrete stochastic dynamical systems. This means they look at LLMs in a structured way, similar to how we analyze control systems in engineering. They explore the "reachable set" of outputs for an LLM: essentially, the range of possible outputs the model can generate from a given starting point when influenced by different prompts. The research highlights that prompt engineering, or optimizing the input tokens, can significantly influence LLM outputs; they show that even short prompts can drastically alter the likelihood of specific outputs. (A short illustrative sketch of the reachable-set idea follows the timestamps below.) Aman and Cameron's work might be a boon for understanding and improving LLMs. They suggest that a deeper exploration of control theory concepts could lead to more reliable and capable language models.

We dropped an additional, more technical video on the research on our Twitter account here: https://x.com/MLStreetTalk/status/1795093759471890606
Pod version with no music/SFX: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/Whats-the-Magic-Word--A-Control-Theory-of-LLM-Prompting-e2khs2t
An additional 20 minutes of unreleased footage is on our Patreon here: https://www.patreon.com/posts/whats-magic-word-104922629

What's the Magic Word? A Control Theory of LLM Prompting (Aman Bhargava, Cameron Witkowski, Manav Shah, Matt Thomson): https://arxiv.org/abs/2310.04444
LLM Control Theory Seminar (April 2024): https://www.youtube.com/watch?v=9QtS9sVBFM0
Society for the Pursuit of AGI (founded by Cameron): https://agisociety.mydurable.com/
Roger Federer demo: http://conway.languagegame.io/inference
Neural Cellular Automata, Active Inference, and the Mystery of Biological Computation (Aman): https://aman-bhargava.com/ai/neuro/neuromorphic/2024/03/25/nca-do-active-inference.html

Aman and Cameron also want to thank Dr. Shi-Zhuo Looi and Prof. Matt Thomson from Caltech for help and advice on their research (https://thomsonlab.caltech.edu/ and https://pma.caltech.edu/people/looi-shi-zhuo).
https://x.com/ABhargava2000
https://x.com/witkowski_cam

TOC:
00:00:00 - Main Intro
00:06:25 - Bios
00:07:50 - Control Theory and Governors
00:09:37 - LLM Control Theory
00:17:17 - Federer Game
00:19:49 - Building LLM Controllers
00:20:56 - Priors in LLMs
00:28:44 - Manipulating LLMs
00:34:11 - Adversarial Examples and Robustification
00:36:54 - Model vs Software
00:39:12 - Experiments in the Paper
00:44:36 - Language as an Interstate Freeway
00:46:41 - Collective Intelligence
00:58:54 - Biomimetic Intelligence
01:03:37 - Society for the Pursuit of AGI
01:05:47 - ICLR Rejection
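To make the "reachable set" framing above concrete, here is a minimal empirical sketch. It is not code from the paper: it assumes GPT-2 via the Hugging Face transformers library, and the starting sequence, candidate control prompts, and target tokens are all made up for illustration. A target token counts as "reached" if at least one of the tried prompts makes it the model's greedy next-token prediction after the fixed starting sequence.

```python
# Illustrative sketch only (assumptions: GPT-2 via Hugging Face transformers;
# prompts/targets are hypothetical). Probes which target tokens a short control
# prompt u can steer the model to emit after a fixed state sequence x0.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

x0 = " the capital of France is"                      # fixed imposed state sequence
candidate_prompts = ["", "Question:", "Antwort auf Deutsch:", "Name a German city."]
targets = [" Paris", " Berlin", " Lyon"]              # outputs tested for reachability

reachable = set()
for u in candidate_prompts:
    ids = tokenizer(u + x0, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]             # next-token logits at the last position
    top_token = tokenizer.decode(int(logits.argmax()))  # greedy continuation under prompt u
    for y in targets:
        if top_token.strip() == y.strip():            # y is reached if some u makes it the argmax
            reachable.add(y)

print("Targets reached under these prompts:", reachable)
```

Roughly in the paper's terms, an output is in the reachable set of a starting sequence if some control prompt steers the model to produce it; the paper studies prompts of bounded length and much larger search spaces, whereas this sketch only tries a handful of hand-written prompts and a single greedy output token.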


