INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads… Need I say more?
If you’re interested in connecting with Igor, head on over to his website, or check out his thesis (the link is a placeholder, as it isn’t published yet).
Because these show notes have a whopping 115 additional links below, I’ll highlight some that I think are particularly worthwhile:
- The best article you’ll ever read on Open Source AI
- The best article you’ll ever read on emergence in ML
- Kate Crawford’s Atlas of AI (Wikipedia)
- On the Measure of Intelligence
- Thomas Piketty’s Capital in the Twenty-First Century (Wikipedia)
- Yurii Nesterov’s Introductory Lectures on Convex Optimization
Chapters
00:02:32 - Introducing Igor
00:10:11 - Aside on EY, LW, EA, etc., a.k.a. lettersoup
00:18:30 - Igor on AI alignment
00:33:06 - “Open Source” in AI
00:41:20 - The story of infinite riches and suffering
00:59:11 - On AI threat models
01:09:25 - Representation in AI
01:15:00 - Hazard fishing
01:18:52 - Intelligence and eugenics
01:34:38 - Emergence
01:49:39 - Considering externalities
01:54:53 - The shape of an argument
02:02:59 - Eugenics
02:07:29 - I’m convinced, what now?
02:19:23 - AIxBio (round ??)
02:29:09 - On open release of models
02:41:48 - Data and copyright
02:45:29 - Scientific accessibility and bullshit
02:54:24 - Igor’s point of view
02:58:40 - Outro
Links
Links to all articles and papers mentioned throughout the episode are listed below, in order of appearance. References mentioned only in the extended version of the episode are also included.
- LIONS Lab at EPFL
- The meme that Igor references
- On the Hardness of Learning Under Symmetries
- Course on the concept of equivariant deep learning
- Aside on EY/EA/etc.
- Sources on Eliezer Yudkowsky
- Scholarly Community Encyclopedia
- TIME100 AI
- Yudkowsky’s personal website
- EY Wikipedia
- A Very Literary Wiki
- TIME article: Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, documenting EY’s ruminations on bombing datacenters; this comes up later in the episode but is included here because it is about EY.
- LessWrong
- MIRI
- Coverage of Nick Bostrom (being a racist)
- The Guardian article: ‘Eugenics on steroids’: the toxic and contested legacy of Oxford’s Future of Humanity Institute
- The Guardian article: Oxford shuts down institute run by Elon Musk-backed philosopher
- Investigative piece on Émile Torres
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
- NY Times article: We Teach A.I. Systems Everything, Including Our Biases
- NY Times article: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
- Timnit Gebru’s Wikipedia
- The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence
- Sources on the environmental impact of LLMs
- Filling Gaps in Trustworthy Development of AI (Igor is an author on this one)
- A Computational Turn in Policy Process Studies: Coevolving Network Dynamics of Policy Change
- The Smoothed Possibility of Social Choice, an introduction to social choice theory and how it overlaps with ML
- Relating to Dan Hendrycks
- Natural Selection Favors AIs over Humans
- “One easy-to-digest source to highlight what he gets wrong [is] Social and Biopolitical Dimensions of Evolutionary Thinking” -Igor
- Introduction to AI Safety, Ethics, and Society, a recently published textbook
- “Source to the section [of this paper] that makes Dan one of my favs from that crowd.” -Igor
- Twitter post referenced in the episode
- Goal Misgeneralization in Deep Reinforcement Learning
- The YouTube Radicalization Pipeline
- MIT Technology Review article: YouTube’s Algorithm Seems to be Funneling People to Alt-Right Videos
- Auditing Radicalization Pathways on YouTube
- SlateQ: A Tractable Decomposition for Reinforcement Learning with Recommendation Sets
- The best article you’ll ever read on Open Source AI
- Suspicious Machines Methodology, referred to as the “Rotterdam Lighthouse Report” in the episode
- Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
- AI Control: Improving Safety Despite Intentional Subversion
- Additional reading on mechanism design
- General: An Introduction to the Theory of Mechanism Design
- Relating to ML: Understanding Incentives: Mechanism Design Becomes Algorithm Design
- Example in ML: Steering No-Regret Learners to a Desired Equilibrium
- The more important Michael Jordan
- Pascal’s Mugging and Risk
- Pascal’s Mugging Wikipedia
- This is Financial Advice, a long video on the memestock phenomenon
- Slides on Skewness Preferences in Choice Under Risk
- Intelligence, eugenics, and toads
- NAFTA
- Lack of representation in AI
- The Guardian article: Google’s solution to accidental algorithmic racism: ban gorillas
- TIME article: Ethical AI Isn’t to Blame for Google’s Gemini Debacle
- TIME article: Google Pauses AI-Made Images of People After Race Inaccuracies
- The Guardian article: Google says sorry for racist auto-tag in photo app
- Emergence
- The best article you’ll ever read on emergence in ML
- Are Emergent Abilities in Large Language Models just In-Context Learning?
- Emergent Abilities of Large Language Models
- Are Emergent Abilities of Large Language Models a Mirage?
- Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
- CICERO paper: Human-level play in the game of Diplomacy by combining language models with strategic reasoning
- Kate Crawford’s Atlas of AI
- Malcolm Harris’ Palo Alto: A History of California, Capitalism, and the World
- Thomas Piketty’s Capital and Ideology
- Video debunking The Bell Curve
- Occam’s razor Wikipedia
- A Causal Framework for AI Regulation and Auditing
- A Simple Combinatorial Model of World Economic History
- Cambridge Analytica Wikipedia
- Previous interview w/ Igor and Carla Cremer
- Roko’s basilisk
- RationalWiki
- Specific source highlighted by Igor
- Counterpoint
- Point/Counterpoint
- RationalWiki
- Eugenics and bigots
- Scott Siskind
- Old Reddit discussion on Scott Siskind Emails
- Meditations on Moloch from Slate Star Codex, a.k.a. Scott Siskind’s old blog
- Discussion on Scott Siskind’s eugenicist rhetoric: awful.systems
- The Beigeness, or How to Kill People with Bad Writing: The Scott Alexander Method
- NY Times article: Silicon Valley’s Safe Space
- The Vulnerable World Hypothesis or “Nick Bostrom’s Black Ball paper”
- Definition of Ur-Fascism on Wikipedia
- Scott Siskind
- Video: The Future is a Dead Mall - Decentraland and the Metaverse
- Wired article: The Libertarian Logic of Peter Thiel
- Hegel
- If you assume 1 = 2, you can prove that I’m the Pope
- The Raft algorithm
- Video: The Alt-Right Playbook
- Igor’s resources short-list
- A Test of the Viable System Model: Theoretical Claim vs. Empirical Evidence
- Jane Jacobs’ The Death and Life of Great American Cities Wikipedia
- David Graeber’s Debt: The First 5000 Years Wikipedia
- Thomas Piketty’s Capital in the Twenty-First Century Wikipedia
- Yurii Nesterov’s Introductory Lectures on Convex Optimization
- Secure Sockets Layer (SSL) Wikipedia
- AIxBio (most of these are repeat references from previous episodes)
- The Operational Risks of AI in Large-Scale Biological Attacks
- Building an early warning system for LLM-aided biological threat creation
- Can large language models democratize access to dual-use biotechnology?
- Will releasing the weights of future large language models grant widespread access to pandemic agents?
- Open-Sourcing Highly Capable Foundation Models
- Propaganda or Science: Open Source AI and Bioterrorism Risk
- Exaggerating the risks (Part 15: Biorisk from LLMs)
- On the Societal Impact of Open Foundation Models
- Tokyo subway sarin attack Wikipedia
- Generative AI Has a Visual Plagiarism Problem, a.k.a. “The Gary Marcus report”
- A follow-up from Gary Marcus: Things are about to get a lot worse for Generative AI
- Monte-Carlo Tree Search (MCTS) Wikipedia
- Source highlighted by Igor
- The Rhetoric of Economics
- Complexity Zoo
- nLab