This site is a work in progress. Launched April 2026 — expect changes.

Blog & Podcast

Blog: aaronbergman.net  ·  Podcast: aaronbergman.net/podcast

Fetched from RSS. Updates hourly.

Blog Posts

#15: Robi Rahman and Aaron tackle donation diversification, decision procedures under moral uncertainty, and other spicy topics
In response to the previous episode, Vegan Hot Ones
Vegan Hot Ones | EA Twitter Fundraiser 2024
Featuring Max Alexander and Robi Rahman (and not Aaron)
Public intellectuals need to say what they actually believe
Intro: This Twitter thread from Kelsey Piper has been reverberating around my psyche since its inception almost six years ago.
Post readout: Utilitarians Should Accept that Some Suffering Cannot be “Offset”
This is an audio readout of my recent post Utilitarians Should Accept that Some Suffering Cannot be “Offset”, also on the EA Forum
Utilitarians Should Accept that Some Suffering Cannot be “Offset”
Note: see further discussion on the EA Forum
Clarifying some points on "Suffering-focused total utilitarianism"
I'm still right
#14: Jesse Smith on HVAC, indoor air quality, and generally being an extremely based person
An actual adult for once
Preparing for the Intelligence Explosion (paper readout and commentary)
In which I read and then briefly discuss a paper by Fin Moorhouse & Will MacAskill
#13: Max Alexander and I debate whether total utilitarianism implies the very repugnant conclusion
(Pigeon Hour x Consistently Candid Crossover)
#12: Arthur Wright and I discuss whether the Givewell suite of charities are really the best way of helping humans alive today, the value of reading old books, rock climbing, and more
Please follow Arthur on Twitter and check out his blog!
Drunk Pigeon Hour!
You earned it
Best of Pigeon Hour
A highlight from each episode 😊
#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse
*Or something like that
#9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more
Note: I can’t seem to edit or remove the “transcript” tab.
#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all
Episode #8 of Pigeon Hour
#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can
Listen on Spotify or Apple Podcasts
#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense)
This wide-ranging conversation between Daniel and Aaron touches on movies, business drama, philosophy of language, ethics and legal theory. The two debate major ethical concepts like utilitarianism...
I regret to report that I've started a podcast (again)
Tl;dr and links
#5: Nathan Barnard (again!) on why general intelligence is basically fake
Follow Nathan’s blog
#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more
Note: skip to minute 4 if you’re already familiar with The EA Archive or would just rather not listen to my spiel

Podcast Episodes

#15: Robi Rahman and Aaron tackle donation diversification, decision procedures under moral uncertainty, and other spicy topics
Summary: In this episode, Aaron and Robi reunite to dissect the nuances of effective charitable giving. The central debate revolves around a common intuition: should a donor diversify their...
Vegan Hot Ones | EA Twitter Fundraiser 2024
A great discussion between my two friends Max Alexander of Scouting Ahead and Robi Rahman (in response to a fundraiser that we wrapped up more than 13 months ago). Tweet with context:...
Post readout: Utilitarians Should Accept that Some Suffering Cannot be “Offset”
This is an audio readout of my recent post Utilitarians Should Accept that Some Suffering Cannot be “Offset”, also on the EA Forum. Enjoy! Get full access to Aaron's Blog at...
#14: Jesse Smith on HVAC, indoor air quality, and generally being an extremely based person
Summary: Aaron is joined by Jesse Smith, a self-described "unconventional EA" (Effective Altruist) who bridges blue-collar expertise with intellectual insight. Jesse recounts his wild early...
Preparing for the Intelligence Explosion (paper readout and commentary)
Preparing for the Intelligence Explosion is a recent paper by Fin Moorhouse and Will MacAskill.
* 00:00 - 1:58:04 is me reading the paper.
* 1:58:05 - 2:26:06 is a string of random thoughts I have...
#13: Max Alexander and I debate whether total utilitarianism implies the very repugnant conclusion
The gang from Episode 10 is back, with yet another Consistently Candid x Pigeon Hour crossover. As Sarah from Consistently Candid describes: In this episode, Aaron Bergman and Max Alexander are back to...
#12: Arthur Wright and I discuss whether the Givewell suite of charities are really the best way of helping humans alive today, the value of reading old books, rock climbing, and more
Please follow Arthur on Twitter and check out his blog!
"Thank you for just summarizing my point in like 1% of the words" - Aaron, to Arthur, circa 34:45
Summary (written by Claude Opus aka Clong):
* Aaron...
Drunk Pigeon Hour!
Intro: Around New Year's, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see...
Best of Pigeon Hour
Table of contents
Note: links take you to the corresponding section below; links to the original episode can be found there.
* Laura Duffy solves housing, ethics, and more [00:01:16]
* Arjun Panickssery...
#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse
Intro: At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or...
#9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more
Note: I can’t seem to edit or remove the “transcript” tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned up one below. Most importantly, follow Sarah on...
#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all
* Follow Max on Twitter
* And read his blog
* Listen here or on Spotify or Apple Podcasts
* RIP Google Podcasts 🪦🪦🪦
Summary: In this philosophical and reflective episode, hosts Aaron and Max engage in...
#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can
* Listen on Spotify or Apple Podcasts
* Be sure to check out and follow Holly’s Substack and org Pause AI.
Blurb and summary from Clong
Blurb: Holly and Aaron had a wide-ranging discussion touching on...
#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense)
Listen on:
* Spotify
* Apple Podcasts
* Google Podcasts
Note: the core discussion on ethics begins at 7:58 and moves into philosophy of language at ~1:12:19
Daniel’s stuff:
* AI X-risk podcast
* The Filan...
#5: Nathan Barnard (again!) on why general intelligence is basically fake
Very imperfect transcript: bit.ly/3QhFgEJ
Summary from Clong: The discussion centers around the concept of a unitary general intelligence or cognitive ability. Whether this exists as a real and...
#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more
Summary (by Claude.ai): This informal podcast covers a wide-ranging conversation between two speakers aligned in the effective altruism (EA) community. They have a similar background coming to EA from...
#3: Nathan Barnard on how financial regulation can inform AI regulation
Summary/specific topics:
- Stress Tests and AI Regulation: Nathan elaborates on the concept of stress tests conducted by central banks. These tests assess the resilience of banks to severe economic...
#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one
- Follow Arjun on Twitter: https://twitter.com/panickssery
- Read and subscribe to his blog: https://arjunpanickssery.substack.com
- A mediocre transcription can be found at...
#1 Laura Duffy solves housing, ethics, and more
A transcript can be found at assemblyai.com/playground/transcript/6y7e7wz28c-30aa-4e83-ba4f-1bddf2e23dad
Aaron's Blog, podcast edition
For All Good's inaugural episode, we talked to Rob Wiblin and Keiran Harris of 80,000 Hours about how and why they produce their show. This episode first appeared on their new feed "80,000 Hours:...
Aaron Bergman on the Narratives Podcast: EA, career, and more
Hear EA Georgetown member Aaron Bergman's recent interview as a guest on the Narratives Podcast! During the show, host Will Jarvis talks to Aaron about a key way he thinks people go wrong when...
Introducing All Good
Welcome to All Good, a show by Georgetown Effective Altruism.
Kids are people too
Update Feb 12, 2023: Automated audio experiment with the Automator app no one uses
Intro: Back in 2016, I got my first job as a summer camp counselor. It was an outdoor adventure day camp, to which the...