INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
The almost Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophi...
As always, the best things come in 3s: dimensions, musketeers, pyramids, and… 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Po...
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeO...
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he...
Take a trip with me through the paper Large Language Models: A Survey, published on February 9th, 2024. All figures and tables mentioned throughout the epi...
Before I begin with the paper-distillation based minisodes, I figured we would go over best practices for reading research papers. I go through the anatomy o...
I provide my thoughts and recommendations regarding personal professional portfolios.
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on the...
Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path and the applications process for research/mentorshi...
In this episode I discuss my initial research proposal for the 2024 Winter AI Safety Camp with one of the individuals who helps facilitate the program, Remme...
A summary of and reflections on the path I have taken to get this podcast started, including some resource recommendations for others who want to do something ...
We’re back after a month-long hiatus with a podcast refactor and advice on the applications process for research/mentorship programs.
Esben reviews an application that I would soon submit for Open Philanthropy’s Career Transition Funding opportunity. Although I didn’t end up receiving the...
Join our hackathon group for the second episode in the Evals November 2023 Hackathon subseries. In this episode, we solidify our goals for the hackathon afte...
This episode kicks off our first subseries, which will consist of recordings taken during my team’s meetings for the AlignmentJams Evals Hackathon in Novembe...
This episode is a brief overview of the major takeaways I had from attending EAG Boston 2023, and an update on my plans for the podcast moving forward.
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on investigating the penalization o...
Welcome to the Into AI Safety podcast! In this episode I explain why I am starting this podcast, what I am trying to accomplish with it, and a ...
After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regar...