Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.
  • How Cleaner Salt Production in Tanga Is Improving Nutrition Outcomes
    When you ask families in Tanga what salt means to them, the answer is often simple: “It’s something we cook with every day.” Yet few realise that the quality of that salt (its purity, safety, and level of iodization) directly affects the health of households, particularly children and pregnant...
    Global Alliance for Improved Nutrition | 2 hours ago
  • The end of the fallout bunker
    In Germany, at least
    Existential Crunch | 3 hours ago
  • Your rights when flying to Europe
    Europe (and the UK) have strong protections for flyers in the case of delayed or cancelled flights. However, very few people are aware of these, and airlines will almost always try to wriggle out of paying up. Even travel agents are often unaware of these laws, or unwilling to fight the airline for you.
    LessWrong | 6 hours ago
  • iOS 3rd party AI 🤖, OpenAI phone 2027 📱, compounding AI work 📈
    TLDR AI | 11 hours ago
  • Rust in Numbers
    Why do manure spreaders have life cycles?
    Asterisk | 11 hours ago
  • Model Spec Midtraining: Improving How Alignment Training Generalizes
    tl;dr We introduce model spec midtraining (MSM): after pre-training but before alignment fine-tuning, we train models on synthetic documents discussing their Model Spec, teaching them how they should behave and why. This controls how models generalize from subsequent alignment training—for example, two models with identical fine-tuning can generalize to different values depending on how MSM...
    LessWrong | 11 hours ago
  • Is AI really a bubble?
    "A technology can be a bubble and still be real. The dot-com bubble was a bubble. The internet was real.". In 2021, experts predicted AI would win a Math Olympiad gold medal in 22 years. It happened last year. A few weeks ago, GPT 5.2 published a novel result in physics. Now the AI companies are openly working on AIs that build smarter AIs that build smarter AIs.
    Machine Intelligence Research Institute | 12 hours ago
  • May is Healthy Vision Month. May 10 is Mother’s Day. This is what it looks like to protect a child’s future.
    If you have children, or have ever been around a one-year-old, you know they are into everything. It is the age of eager discovery; of reaching, crawling, and finally finding your feet. Sahil is no different. He has that same drive to explore, but for the first year of his life, he just couldn’t see …
    Seva Foundation | 12 hours ago
  • A Pro-Supply To-Do List for Congress’s Housing Bill
    America can’t afford a lowest-common-denominator housing supply bill
    Institute for Progress | 13 hours ago
  • RIP Classic Reasoning Benchmarks. What’s Next?
    Give up at least one of: text only, short time horizon, easy to grade, and expert human superiority.
    Epoch Newsletter | 14 hours ago
  • What holds AI safety together? Co-authorship networks from 200 papers
    We (social science PhD students) computed co-authorship networks based on a corpus of 200 AI safety papers covering 2015-2025, and we’d like your help checking if the underlying dataset is right. Co-authorship networks make visible the relative prominence of entities involved in AI safety research, and trace relationships between them.
    LessWrong | 15 hours ago
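    A minimal sketch of how such a co-authorship network can be built (Python with networkx; the two papers below are made-up placeholders, not the authors' 200-paper corpus):

      from itertools import combinations
      import networkx as nx

      # Hypothetical stand-ins for entries in the paper corpus.
      papers = [
          {"title": "Paper A", "authors": ["Alice", "Bob"]},
          {"title": "Paper B", "authors": ["Bob", "Carol", "Dan"]},
      ]

      G = nx.Graph()
      for paper in papers:
          # Each pair of co-authors on a paper gets an edge; repeated
          # collaborations accumulate edge weight.
          for a, b in combinations(sorted(paper["authors"]), 2):
              if G.has_edge(a, b):
                  G[a][b]["weight"] += 1
              else:
                  G.add_edge(a, b, weight=1)

      # Degree centrality as one rough proxy for an author's relative prominence.
      print(nx.degree_centrality(G))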
  • Let Kids Keep More Productivity Gains
    While I was traveling, Julia asked me: why is Anna saying her fiddle practice is only two minutes? In this case, two minutes was the right amount of time! Anna (10y) and I had been fighting a lot about practice. She'd complain, slump, stop repeatedly to make adjustments, and generally be miserable.
    LessWrong | 15 hours ago
  • Does your AI perform badly because you — you, specifically — are a bad person?
    Claude really got me lately. I’d given it an elaborate prompt in an attempt to summon an AGI-level answer to my third-grade level question. Embarrassingly, it included the phrase, “this work might be reviewed by probability theorists, who are very pedantic”. Claude didn’t miss a beat.
    LessWrong | 15 hours ago
  • AI risk was not invented by AI CEOs to hype their companies
    I hear that many people believe that the idea of advanced AI threatening human existence was invented by AI CEOs to hype their products. I’ve even been condescendingly informed of this, as if I am the one at risk of naively accepting AI companies’ preferred narratives. If you are reading this, you are probably familiar enough with the decades-old AI safety community to know this isn’t true.
    LessWrong | 15 hours ago
  • [Linkpost] Interpreting Language Model Parameters
    This is the latest work in our Parameter Decomposition agenda. We introduce a new parameter decomposition method, adVersarial Parameter Decomposition (VPD) and decompose the parameters of a small language model with it. VPD greatly improves on our previous techniques, Stochastic Parameter Decomposition (SPD) and Attribution-based Parameter Decomposition (APD).
    LessWrong | 16 hours ago
  • [Linkpost] Interpreting Language Model Parameters
    This is the latest work in our Parameter Decomposition agenda. We introduce a new parameter decomposition method, adVersarial Parameter Decomposition (VPD) and decompose the parameters of a small language model with it. VPD greatly improves on our previous techniques, Stochastic Parameter Decomposition (SPD) and Attribution-based Parameter Decomposition (APD).
    AI Alignment Forum | 17 hours ago
  • New Book — Compassionate Purpose: Personal Inspiration for a Better World
    “How are we to live, in a world in which there is so much unnecessary suffering? Magnus Vinding looks unflinchingly at that question, and gives an answer that is realistic, and yet inspiring. Read this book. It may change your life.” — Peter Singer, author of Animal Liberation. I have just published a book:
    Effective Altruism Forum | 17 hours ago
  • Motivated reasoning, confirmation bias, and AI risk theory
    Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias. - From Scott Alexander's review of Julia Galef's The Scout Mindset.
    AI Alignment Forum | 19 hours ago
  • What’s new in biology: May 2026
    A cure for congenital deafness, recreating snake venom, antibodies, a legend in cardiovascular medicine, and a successful hair loss treatment?
    The Works in Progress Newsletter | 19 hours ago
  • The Best Argument Against Deontology Is About Suitcases
    Pack it up, deontologists!
    Bentham's Newsletter | 19 hours ago
  • What Tourists Will (And Won’t) Pay For Whale Watching
    A study of blue whale watchers in Mexico finds that boat crowding more than whale numbers shapes what tourists are willing to pay, with implications for animal welfare, local economies, and conservation.
    Faunalytics | 20 hours ago
  • Business Operations Associate / Specialist
    80,000 Hours | 20 hours ago
  • People Operations Associate / Specialist
    80,000 Hours | 20 hours ago
  • Recruiting Associate / Specialist
    80,000 Hours | 20 hours ago
  • A Guidebook on “Whole-of-State Cybersecurity”
    A new guidebook released by CLTC’s Public Interest Cybersecurity Program highlights the benefits of “community cyber defense programs” — including cybersecurity clinics, regional security operation centers (RSOCs), and state cyber corps — as a resource for defending organizations like nonprofits, rural hospitals, schools, local utilities, counties, municipalities, and small businesses from...
    Center for Long-Term Cybersecurity | 22 hours ago
  • Zach Robinson | Effective Altruism Stories
    “Why is it that we set lines in these seemingly arbitrary places based on what people believe or what they look like or how close they are to us?” Zach Robinson couldn't answer that question growing up in Omaha, where he had a different race, religion, and sexual orientation than most of his community.
    Centre for Effective Altruism | 22 hours ago
  • Your Grants Are the Floor, Not the Ceiling
    The more I have been working with grantmakers, the more I have come to value the impact they can have outside of their direct grantmaking.
    Measured Life | 22 hours ago
  • Would You Like To Give Me Money?
    if so, what for?
    Atoms vs Bits | 23 hours ago
  • Compassionate Purpose: Personal Inspiration for a Better World
    “Read this book. It may change your life.” — Peter Singer, author of Animal Liberation. What if the point of self-improvement were not just to feel better or get ahead, but to become more capable of helping in a hurting world? In Compassionate Purpose, Magnus Vinding bridges self-help and ethics with a framework for personal development...
    Magnus Vinding | 1 days ago
  • Krystal Birungi is awarded the Global Citizen Prize
    It is with great pleasure that I announce that my colleague Krystal Birungi of Target Malaria Uganda at the Uganda Virus Research Institute has been awarded the 2026 Global Citizen Prize. The Global Citizen Prize seeks to identify and celebrate grassroots activists in local communities who are fighting for social justice, championing […].
    Target Malaria | 1 days ago
  • Open position: Advisor
    80,000 Hours | 1 days ago
  • How much electricity does AI consume? [2025 summary]
    What share of electricity is consumed by data centres? What's the energy footprint of ChatGPT and other chatbots?
    Sustainability by Numbers | 1 days ago
  • Post-Inkhaven Post
    (Skip if you aren’t interested in Inkhaven chatter). Watteau, The Embarkation for Cythera (1717) Inkhaven is over. I am sad. Surprisingly: I didn’t actually find the writing that hard. Lots of people, maybe the majority of the cohort, were working right up to midnight, and didn’t have any extra posts in the bank for days when they didn’t want to write. It’s quite possible that...
    Henry Stanley | 1 days ago
  • OpenAI exec finances 💰, Amazon supply chain services 🚚, Redis arrays 🧑‍💻
    TLDR AI | 1 days ago
  • Goodmaxxing
    (Crosspost). If you’re young and online, you’re probably maxxing something. Maybe you’re looksmaxxing: trying to maximize your hotness (e.g. by hitting yourself in the face with a hammer). Maybe, like Clavicular, you do it just to mog other people—to look better than they do. But good looks reach diminishing marginal returns.
    Effective Altruism Forum | 1 days ago
  • May is Healthy Vision Month. This is how sight united three generations of women.
    In Battambang, Cambodia, three generations of women run a family car wash. It’s a life built on grit, love, and long days, but for 20 years, there was a missing piece at the center of their home. At 74, Phen Mao lived in a blur. After she lost her sight, her daughter, Lorb, carried the …
    Seva Foundation | 1 days ago
  • It's nice of you to worry about me, but I really do have a life
    I have two shameful secrets that I probably shouldn't talk about online: I love my family. I enjoy my hobbies. "What an idiot!" you probably think. "Doesn't he realize that at his next job interview, HR will probably use an AI that can match his online writing based on a short sample of written text, and when they ask 'hey AI, is this guy really 100% devoted to his job, and does he spend...
    LessWrong | 1 days ago
  • Irretrievability; or, Murphy's Curse of Oneshotness upon ASI
    Example 1: The Viking 1 lander. In the 1970s, NASA sent a pair of probes to Mars, the Viking 1 and Viking 2 missions. Total cost of $1B (1970), equivalent to about $7B (2025). The Viking 1 probe operated on Mars's surface for six years, before its battery began to seriously degrade. One might have thought a battery problem like that would spell the irrevocable end of the mission.
    LessWrong | 2 days ago
  • AIM's new charity taxonomy
    0. I don't work at AIM. Why care about this? This taxonomy is written from AIM's perspective, but it may be helpful more broadly: if you're starting a new charity, incubating others, or doing charity idea research, the taxonomy gives you a structured way to think about which ideas to pursue, what founder profile fits, and what research and support each idea needs.
    Effective Altruism Forum | 2 days ago
  • 🟡 Iran says it will target US naval vessels, UAE leaves OPEC, GPT-5.5 similar to Mythos on cyber tasks || Global Risks Weekly Roundup #18/2026
    Executive summary
    Sentinel | 2 days ago
  • “If You Do One Thing for Animals This Year, Do This” by Becca Rogers
    There is a short window to prevent a US bill that would overturn decades of animal welfare progress. This is arguably the most consequential piece of farm animal legislation in U.S. history. The Farm Bill currently being considered by the U.S. Congress includes the “Save Our Bacon Act”, which would eliminate states' abilities to set standards on how farmed animals are raised and...
    Effective Altruism Forum Podcast | 2 days ago
  • AI Industrial Takeoff — Part 1: Maximum growth rates with current technology
    How fast could an AI-driven economy grow? Most economists expect a few percentage points at best, comparable to previous general-purpose technologies (Acemoglu (2024)). Those closer to AI development tend to imagine something much more radical (Shulman (2023); Davidson and Hadshar (2025)). This series aims to ground growth rates in how physical production works.
    LessWrong | 2 days ago
  • Goodmaxxing
    A letter to Gen Alpha
    Bentham's Newsletter | 2 days ago
  • AI Industrial Takeoff — Part 1: Maximum growth rates with current technology
    How fast can an AI-driven economy grow? Economists expect a few percentage points at best; those closer to AI development imagine Dyson spheres within years. Who is correct?
    Defenses in Depth | 2 days ago
  • How the Radical Fund Sustained Radical Imagination
    Editors’ Note: Carmen Rojas continues HistPhil’s book forum on John Witt’s The Radical Fund (Simon & Schuster, 2025). John Fabian Witt’s The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America is one of the best books I’ve read about the perils and promises of philanthropy in the United States. It …
    HistPhil | 2 days ago
  • Taking woo seriously but not literally
    I think that a lot of “woo” - a broad term that includes things like chakras, energy healing, Tarot, various Eastern religions and neopagan practices, etc. - consists of things that have real effects and uses, even if many (though not all) of their practitioners are mistaken about the exact mechanisms and make unwarranted metaphysical claims. Now, a woo practitioner might explain what’s...
    LessWrong | 2 days ago
  • Open Thread 432
    Astral Codex Ten | 2 days ago
  • Import AI 455: AI systems are about to start building themselves.
    The first step towards recursive self improvement
    Import AI | 2 days ago
  • Linkpost for May
    Effective Altruism
    Thing of Things | 2 days ago
  • Van Halen Of The Heart
    just show you care
    Atoms vs Bits | 2 days ago
  • ChinAI #357: AI Surveillance in Chinese Universities
    Greetings from a world where…...
    ChinAI Newsletter | 2 days ago
  • Podcast | How will falling fertility rates hurt the economy? With Melissa Kearney
    Typically, a society’s population remains stable if women have about 2.1 children each. By that metric, the world has a big problem. In developed countries the total fertility rate is well below that figure. So what are the economic consequences of that shortfall?
    J-PAL | 2 days ago
  • Can putting a price tag on ending poverty unlock billions in giving?
    New research from J-PAL affiliate Paul Niehaus, cofounder of GiveDirectly, reveals ending extreme poverty may be more achievable than many assume. The question now is whether that kind of clarity can mobilize philanthropic money sitting on the sidelines...
    J-PAL | 2 days ago
  • Podcast | Boosting farmers' profits
    In this episode of VoxDevTalks, Craig McIntosh discusses a recent J-PAL policy insight that takes stock of the evidence from randomised controlled trials on credit, subsidies, and cash transfers for smallholder farmers, arriving at conclusions that challenge some of agriculture's most persistent development assumptions.
    J-PAL | 2 days ago
  • The EU AI Act Newsletter #101: Trilogue Breakdown
    Talks on delaying the AI Act collapse over industrial AI, Merz diverges from his coalition partner, and Parliament invites Anthropic to a hearing on the Mythos model.
    The EU AI Act Newsletter | 2 days ago
  • What is a whale worth, and how much does a shrimp weigh on the moral scales?
    Opinion piece in De Standaard (04-05-2026). One rescued humpback shows how unreliable our moral radar is. It was movingly beautiful to see: humpback whale Timmy swimming free again in the open sea after a rescue operation, after he, in late March, …
    The Rational Ethicist | 2 days ago
  • Thoughts on investing for transformative AI
    TLDR: I basically don’t. (Contents: ethical concerns; thoughts on how to avoid becoming corrupted; future worlds and what happens in the lead-up to ASI; predictions are hard, especially about markets; trend-following; the EA portfolio; leaning my investments in the right direction; appendix with some specific predictions.)
    Philosophical Multicore | 2 days ago
  • What I learned from making a fire
    One time my friends and I made a fire on the beach.
    Hauke’s Blog | 2 days ago
  • Meta humanoid robots 🤖, SpaceX costs leak 💰, open design 🧑‍🎨
    TLDR AI | 2 days ago
  • Dairy cows make their misery expensive (but their calves can’t)
    How much do cows suffer in the production of milk? I can’t answer that; understanding animal experience is hard. But I can at least provide some facts about the conditions dairy cows live in, which might be useful to you in making your own assessment. My biggest conclusion is that cows made better choices than chickens by making their misery financially costly to farmers.
    LessWrong | 3 days ago
  • Exploration Hacking: Can LLMs Learn to Resist RL Training?
    We empirically investigate exploration hacking (EH) — where models strategically alter their exploration to resist RL training — by creating model organisms that resist capability elicitation, evaluating countermeasures, and auditing frontier models for their propensity.
    AI Alignment Forum | 3 days ago
  • Explicit Racial Discrimination in College Debate
    Plus other madness
    Bentham's Newsletter | 3 days ago
  • Word-learner
    Words, words were truly alive on the tongue, in the head
    Atoms vs Bits | 3 days ago
  • Are the last 3 months the start of an AI acceleration?
    Most public commentary is debating whether AI has hit a plateau.
    Benjamin Todd | 3 days ago
  • The better algorithms of our nature
    Engagement, bridging, and the design of digital platforms which don't pander to our weaknesses.
    Reasonable People | 3 days ago
  • AI, Fiction, Literature: A Scenario
    Soon, if not already, established authors of mass-market fiction will publish AI-assisted writing.
    Raising Dust | 3 days ago
  • What’s more likely to be sentient: an ant or ChatGPT?
    Sentience is hot these days. Partly because of the development of impressive new AI systems, everyone seems to be asking: How do we know if something is sentient? While consciousness means simply having a subjective point of view on the world — a feeling of what it’s like to be you — sentience is the […]...
    Future Perfect | 3 days ago
  • Measuring the ability of Opus 4.5 to fool narrow classifiers
    We measure the ability of Opus 4.5 to fool prompted or fine-tuned classifiers trying to detect a narrow set of outcomes. We find that: The Opus 4.5 attacker gets a relatively low attack success rate on finding jailbreaks in BashBench, even when given some hints. Performance is especially low against a prompted Opus 4.5 classifier with a CoT and a fine-tuned Haiku 4.5 classifier.
    LessWrong | 3 days ago
  • Notes on equanimity from the inside
    I've always thought of myself as even-keeled and equanimous; that my mind is still. In hindsight, I had no idea what I was talking about. Halfway through my second ten-day meditation retreat, I experienced a depth of equanimity that broke my existing frame of reference. It’s hard to convey in words.
    Effective Altruism Forum | 3 days ago
  • A new rationalist self-improvement book: the 12 Levers
    I'm publishing a book that I think can fairly be described as a rationalist approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach.
    LessWrong | 3 days ago
  • OpenAI's red line for AI self-improvement is fundamentally flawed
    TL;DR. OpenAI's "Critical" threshold for AI self-improvement in the Preparedness Framework v2 has three structural problems: It fires too late. The lagging indicator, 5× generational acceleration sustained for several months, lets ~3 years of effective progress accumulate before triggering. Anthropic used a 2x threshold instead of a 5x. It's self-certified.
    LessWrong | 3 days ago
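    A back-of-envelope check of the "~3 years" figure, assuming the 5× threshold acts as a simple multiplier on research progress and taking "several months" as eight (that duration is my assumption, not the post's):

      # Illustrative arithmetic only; 8 months stands in for "several months".
      acceleration = 5.0        # the 5x "Critical" threshold multiplier
      months_sustained = 8      # assumed duration before the indicator fires
      effective_months = acceleration * months_sustained
      print(f"{effective_months / 12:.1f} years of effective progress")  # ~3.3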
  • A new rationalist self-improvement book: the 12 Levers
    I'm publishing a book that I think can fairly be described as a rationalist/evidence oriented approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach.
    Effective Altruism Forum | 3 days ago
  • Open position: Web Product Lead
    80,000 Hours | 3 days ago
  • Open position: Product and Growth Manager
    80,000 Hours | 3 days ago
  • You Are Not Immune To Mode Collapse
    “Mode collapse” is a few things. First it was an observation about how early image generating AIs often collapsed to producing just the modal output from their training distribution (something very common, like a house with a white picket fence and a tree in the garden). Then it was the observation that this effect seemed to occur extremely quickly when AIs were trained on AI-generated inputs.
    LessWrong | 4 days ago
  • What does it mean for an AI to “want” something?
    MIRI President Nate Soares breaks it down with @novaramedia: AI systems are going to do something that’s to “wanting” what submarine movement is to swimming. Not human, but functionally the same outcome.
    Machine Intelligence Research Institute | 4 days ago
  • The seven deadly curses of superhuman AI
    Developing a superintelligent AI that does what we want, without killing everyone, might be extremely difficult. In this video, we showcase the arguments from Chapter 10 of the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. The chapter draws on analogies with space probes, nuclear reactors, and computer security.
    Rational Animations | 4 days ago
  • Billie Eilish Is Obviously Right About Animals
    Eating someone is inconsistent with loving them properly
    Bentham's Newsletter | 4 days ago
  • Eternal Recurrence*
    In an earlier post—...
    Fake Nous | 4 days ago
  • Games that change your mind
    Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes games teach people less obvious things—things that are more experiential or ineffable, things that you didn’t know you didn’t know, concepts that stick in your mind, deep things.
    LessWrong | 4 days ago
  • Some deaf children are hearing again because of a new gene therapy
    In a lab room, a toddler, deaf from birth, sits while a tone plays. There’s no reaction. His face does not change. Six weeks later, after a single injection of an experimental gene therapy, the same toddler is back in the same room. The tone plays. The toddler’s head turns toward the sound. And somewhere […]...
    Future Perfect | 4 days ago
  • Primary Care Physicians are Incompetent. We Need More of Them.
    The typical primary care physician is incompetent in every measurable respect. This is a huge problem. Here, I make the case that: primary care physicians are broadly, grossly incompetent; this is due to empty credentialism; and making it much (~10X) easier to become a PCP is a good solution.
    LessWrong | 4 days ago
  • Games that change your mind
    Crossposted from world spirit sock puppet. Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes …
    Meteuphoric | 4 days ago
  • Understand why AI is a doom-risk in 39 captivating minutes
    Crossposted from world spirit sock puppet. I’ve really wanted more good short accounts of why AI poses an existential risk. Working on one myself has been one of those incredibly high priorities I keep putting off. Meanwhile award-winning journalist Ben …
    Meteuphoric | 4 days ago
  • Games that change your mind
    Some things you might learn from games are pretty blatant: Trivial Pursuit might teach you trivia, MasterType might teach you about typing, Grand Theft Auto might teach you about driving or crime. But sometimes games teach people less obvious things—things that are more experiential or ineffable, things that you didn’t know you didn’t know, concepts that stick in your mind, deep things.
    Worldly Positions | 4 days ago
  • They're HARVESTING us!
    ➡️ Take action on AI risks: in a few clicks, alert your elected representatives and send the prepared letter template. It's automated for minimal effort: https://taap.it/TF-PauseIACampagnes ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️ What if The Matrix were not fiction, but a documentary about your attachment?
    The Flares | 4 days ago
  • Understand why AI is a doom-risk in 39 captivating minutes
    I’ve really wanted more good short accounts of why AI poses an existential risk. Working on one myself has been one of those incredibly high priorities I keep putting off. Meanwhile award-winning journalist Ben Bradford of NPR has made a podcast version of my case for AI x-risk that I am thrilled with!
    World Spirit Sock Puppet | 4 days ago
  • How Go Players Disempower Themselves to AI
    Written as part of the MATS 9.1 extension program, mentored by Richard Ngo. From March 9th to 15th 2016, Go players around the world stayed up to watch their game fall to AI. Google DeepMind’s AlphaGo defeated Lee Sedol, commonly understood to be the world’s strongest player at the time, with a convincing 4-1 score.
    LessWrong | 4 days ago
  • Dairy cows make their misery expensive (but their calves can’t)
    How much do cows suffer in the production of milk? I can’t answer that; understanding animal experience is hard. But I can at least provide some facts about the conditions dairy cows live in, which might be useful to you in making your own assessment. My biggest conclusion is that cows made better choices than …
    Aceso Under Glass | 4 days ago
  • How much should the ideal person cry wolf?
    It is a fact about wolves and rationality that you should warn people about wolves quite a few times for every effective wolf attack. In particular, there is an asymmetry between the costs of having one’s flock devoured and averting a non-eventuating wolf attack. If the carnage is a hundred times worse, then it’s worth up to ninety-nine false alarms to stop it.
    LessWrong | 4 days ago
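    A minimal sketch of the break-even arithmetic in the teaser, assuming a simple linear cost model with illustrative unit costs:

      def break_even_false_alarms(cost_attack: float, cost_false_alarm: float) -> float:
          """Maximum false alarms worth tolerating per real attack prevented.

          Warning pays off while the cost of the averted attack exceeds the
          total cost of all alarms raised (one true alarm plus the false ones),
          so the break-even count is the cost ratio minus one.
          """
          return cost_attack / cost_false_alarm - 1

      # "If the carnage is a hundred times worse" -> up to 99 false alarms.
      print(break_even_false_alarms(100.0, 1.0))  # 99.0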
  • Are AI benchmarks doomed?
    In this episode, Greg Burnham and Tom Adamczewski join Anson Ho to push back on benchmark pessimism and dig into what the next generation of AI benchmarks could look like.
    Epoch Newsletter | 5 days ago
  • Conditional misalignment: Mitigations can hide EM behind contextual cues
    This is the abstract, introduction, and discussion of our new paper. We study three popular mitigations for emergent misalignment (EM) — diluting misaligned data with benign data, post-hoc HHH finetuning, and inoculation prompting — and show that each can leave behind conditional misalignment: the model reverts to broadly misaligned behavior when prompts contain cues from the misaligned...
    LessWrong | 5 days ago
  • Risk from fitness-seeking AIs: mechanisms and mitigations
    Current AIs routinely take unintended actions to score well on tasks: hardcoding test cases, training on the test set, downplaying issues, etc. This misalignment is still somewhat incoherent, but it increasingly resembles what I call " fitness-seeking"—a family of misaligned motivations centered on performing well in training and evaluations (e.g., reward-seeking).
    LessWrong | 5 days ago
  • Basic Rights for AIs
    The topic of AI welfare is fast becoming mainstream. As I wrote in my last post, there’s an emerging debate that has been drawing some strong reactions. There is some resistance to even treating AI welfare as a legitimate concern. But there’s a perhaps more understandable resistance—not to taking AI welfare seriously in general, but to particular […].
    Center for Reducing Suffering | 5 days ago
  • Sanity-checking “Incompressible Knowledge Probes”
    Or: did a chief scientist of an AI assistant startup conclusively show that GPT-5.5 has 9.7T parameters? Recently, a paper was circulated on Twitter claiming to have reverse engineered the parameter count of many frontier closed-source models including the newer GPT-5.5 (9.7T parameters) and Claude Opus 4.6 (5.3T parameters) as well as older models such as o1 (3.5T) and...
    LessWrong | 5 days ago
  • Risk from fitness-seeking AIs: mechanisms and mitigations
    Fitness-seeking is increasingly what misalignment looks like in practice—how should we respond?
    Redwood Research | 5 days ago
  • Risk from fitness-seeking AIs: mechanisms and mitigations
    Current AIs routinely take unintended actions to score well on tasks: hardcoding test cases, training on the test set, downplaying issues, etc. This misalignment is still somewhat incoherent, but it increasingly resembles what I call " fitness-seeking"—a family of misaligned motivations centered on performing well in training and evaluations (e.g., reward-seeking).
    AI Alignment Forum | 5 days ago
  • The Human Cost of Farming Animals – The Transfarmation Project
    Mercy for Animals | 5 days ago
  • "Experts Say," Um, No They Don't
    A random professor's opinion isn't an expert consensus!
    Bentham's Newsletter | 5 days ago
  • AI unemployment and AI extinction are often the same
    My sense is that people think of AI existential risk and AI unemployment as distinct issues. Some people are extremely concerned about extinction and perhaps even indifferent to total unemployment. Some people think of moderate AI unemployment as a realistic and concerning issue, and AI extinction as science fiction.
    LessWrong | 5 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen
  • Ozzie Gooen | The QURI Medley
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Jiwoon Hwang
  • Incremental Updates
  • Florian Jehn | Existential Crunch
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McCluskey | Bayesian Investor
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Andrew Snyder-Beattie | Defenses in Depth
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Anisha Zaveri
  • Linch Zhang

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • AI X-risk Research Podcast
  • Alignment Newsletter Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.