Effective Altruism News


ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Why “Minds Aren’t Magic”?
    A lot of people straightforwardly believe that minds are magic: that our decision making is not simply the result of electricity and biochemistry in neurons and synapses in the brain, but at the core is the product of an immortal soul. This, of course, I reject. But it seems to me that a lot of…
    Minds Aren’t Magic | 11 hours ago
  • Don’t Worry – It Can’t Happen
    (Originally a twitter thread) When @fermatslibrary brought up this 1940 paper about why we have nothing to worry about from nuclear chain reactions, I first checked that it was real and not a modern forgery. Because it seems almost too good to be true in the light of current AI safety talk. Yes, the paper was real: Harrington, […]...
    Andart II | 13 hours ago
  • 13 Arguments About a Transition to Neuralese AIs
    Over the past year, I have talked to several people about whether they expect frontier AI companies to transition away from the current paradigm of transformer LLMs toward models that reason in neuralese within the next few years. This post summarizes 13 common arguments I’ve heard, six in favor and seven against a transition to neuralese AIs. The following table provides a summary:
    LessWrong | 14 hours ago
  • Book Review: Plasticosis
    Plasticosis (free pdf available here) is a novel by Ikse Mennen about a postapocalyptic future in which it turns out that microplastics make people very, very sick with “plasticosis.” The resulting political disruption caused the United States to fracture into thousands of city-states.
    Thing of Things | 14 hours ago
  • A country of alien idiots in a datacenter: AI progress and public alarm
    Epistemic status: I'm pretty sure AI will alarm the public enough to change the alignment challenge substantially. I offer my mainline scenario as an intuition pump, but I expect it to be wrong in many ways, some important. Abstract arguments are in the Race Conditions and concluding sections. … Nora has a friend in her phone. Her mom complains about her new AI "colleagues."
    LessWrong | 15 hours ago
  • Jane Goodall’s legacy is about animal rights — and the choices on our plates
    Op-ed by Michael Freeman published in The Sacramento Bee on October 30, 2025.
    Mercy for Animals | 16 hours ago
  • Cyber Volunteers Convene in Madison, Wisconsin
    On October 23rd, the UC Berkeley Center for Long-Term Cybersecurity and Wisconsin Emergency Management were proud to host a packed room of cyber defenders across academia, state government, …
    Center for Long-Term Cybersecurity | 16 hours ago
  • Epoch’s Capabilities Index stitches together benchmarks across a wide range of difficulties
    Interpreting our new capabilities index
    Epoch Newsletter | 17 hours ago
  • Toward Statistical Mechanics Of Interfaces Under Selection Pressure
    Imagine using an ML-like training process to design two simple electronic components, in series. The parameters θ1 control the function performed by the first component, and the parameters θ2 control the function performed by the second component.
    LessWrong | 17 hours ago
  • Two easy digital intentionality practices
    A lot of people are daunted by the idea of doing a full digital declutter. Those people ask me all the time, “isn’t there something easier I can do that will still give me some of those sweet sweet benefits you were talking about?”. The answer is: sort of.
    LessWrong | 17 hours ago
  • A scheme to credit hack policy gradient training
    Thanks to Inkhaven for making me write this, and Justis Mills, Abram Demski, Markus Strasser, Vaniver and Gwern for comments. None of them endorse this piece. The safety community has previously worried about an AI hijacking the training process to change itself in ways that it endorses, but the developers don’t.
    AI Alignment Forum | 17 hours ago
  • Urgent Call for Accelerated Action on Climate-Nutrition Integration – Latest Assessment
    London/Geneva. For immediate release. Sub-Saharan Africa, Latin America and the Caribbean are leading the way.
    Global Alliance for Improved Nutrition | 17 hours ago
  • More pieces we would like to commission
    Write for us.
    The Works in Progress Newsletter | 18 hours ago
  • Growing the Coalition: Where the Metascience Alliance Is Headed
    Improving research works best when efforts are coordinated, not fragmented. The Metascience Alliance is a coalition for coordination and collaboration, connecting stakeholders, identifying and advancing shared priorities, and accelerating collective progress. Since we first introduced the Metascience Alliance at the 2025 Metascience Conference, 39 organizations have signed the Letter of...
    Center for Open Science | 19 hours ago
  • Incomparability Implies Nihilism
    If there are incomparable goods then nothing we do matters
    Bentham's Newsletter | 19 hours ago
  • A federal AI backstop is not as insane as it sounds
    Transformer Weekly: No B30A chips for China, Altman’s ‘pattern of lying’ and a watered down EU AI Act...
    Transformer | 20 hours ago
  • Factory Farming Fuels A Climate “Doom Loop”
    Extreme weather is killing millions of farmed animals and destroying farms worldwide, creating a cycle of suffering and loss.
    Faunalytics | 20 hours ago
  • A personal take on why (and why not) to work on AI safety at Open Philanthropy
    You may have noticed that Open Philanthropy is hiring for several roles in our GCR division: senior generalists across our global catastrophic risks team, and grantmakers for our technical AI safety team.
    Catherine’s Blog | 20 hours ago
  • Above the Thames
    What Happens When The Centre Cannot Hold (and isn't trying)
    Manifold Markets | 20 hours ago
  • What giving people money doesn’t fix
    By Caitlin Tulloch, Senior Director of Research, Learning, and Product at GiveDirectly. Caitlin is an expert in evidence-based policy and cost-effectiveness, and formerly served as Deputy Director in USAID’s Office of the Chief Economist. Summary: ❌ Cash isn’t a silver bullet and it can’t replace missing or broken systems.
    Effective Altruism Forum | 20 hours ago
  • Moral Self-Indulgence
    On Helping vs Expressing
    Good Thoughts | 21 hours ago
  • Job ad disguised as a blogpost
    Help me save this blog?
    Speculative Decoding | 22 hours ago
  • The Medici Method
    Florence’s leading medieval family turned a banking career into political power and paradigm shifts in art and science. Their methods hold lessons for philanthropy today.
    Palladium Magazine Newsletter | 22 hours ago
  • Strangers calling: the value of warm responses to cold outreach
    Tl;dr: I think responding to cold outreach (Linkedin DMs, emails) is a high-leverage way to help someone pursue an impactful career. Spending 15 minutes on Zoom or 5 minutes to share resources in writing can:
    Effective Altruism Forum | 23 hours ago
  • A personal take on why (and why not) to work on Open Philanthropy's AI teams
    You may have noticed that Open Philanthropy is hiring for several roles in our GCR division: senior generalists across our global catastrophic risks team, and grantmakers for our technical AI safety team. (We're also hiring for recruiting and operations roles! I know very little about either field, so I'm not going to talk about them here.).
    Effective Altruism Forum | 23 hours ago
  • What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
    Karl Koch is founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails.
    Future of Life Institute | 24 hours ago
  • In What Sense Is Life Suffering?
    Astral Codex Ten | 1 days ago
  • Things I Learned from College
    (that I still remember a decade later). Evolution on Earth. Fact 1: When foxes are bred to be more docile, their ears become floppy like dogs’ ears instead of pointy like wild foxes’. Fact 2: Crows can learn to use a short stick to fetch a longer stick to fetch food. The basic setup of the experiment is: There’s a box with some food at the bottom. The crow can’t reach the food.
    Philosophical Multicore | 1 days ago
  • Debunking “When Prophecy Fails”
    In 1954, Dorothy Martin predicted an apocalyptic flood and promised her followers rescue by flying saucers. When neither arrived, she recanted, her group dissolved, and efforts to proselytize ceased. But When Prophecy Fails (1956), the now-canonical account of the event, claimed the opposite: that the group doubled down on its beliefs and began recruiting—evidence, the authors argued, of a new...
    LessWrong | 1 days ago
  • October 2025 AI safety news: Adaptive attacks, Tokenization, Impossible tasks
    These days, I imagine it is rough for researchers working on LLM defenses.
    AI Safety Takes | 1 days ago
  • Augustine of Hippo's Handbook on Faith, Hope, and Love in Latin (or: Claude as Pandoc++)
    tl;dr Here’s a pdf. The story of me making it is slightly fun. Augustine of Hippo, a prominent Christian of the 4th and 5th centuries who is recognized as a saint by many churches, wrote many things, including a work known as the Handbook on Faith, Hope, and Love (or the Enchiridion on Faith, Hope, and Love, or replace “Love” with “Charity”, or, in Latin, Enchiridion de Fide, Spe, et...
    Daniel Filan | 1 days ago
  • Elon $1T comp approved 💰, Google TPUs threaten Nvidia ⚡, agents from scratch 👨‍💻
    TLDR AI | 1 days ago
  • Galaxy brain resistance
    Vitalik Buterin | 1 days ago
  • The Humane League is hiring!
    🐓 About the role 🐓. The Humane League is seeking a self-motivated, organized, collaborative individual with the drive to create progressive change for millions of farmed animals as part of our Global Corporate Engagement team.
    Animal Advocacy Forum | 2 days ago
  • How to be convincing when talking to people about existential threat from AI
    I think I’m pretty good at convincing people about AI dangers in personal conversations. This post talks about the basics of speaking convincingly about AI dangers to people. Prerequisites. I. Learn to truly see them. In 2022, at a CFAR workshop, I was introduced to circling. It is multi-player meditation.
    LessWrong | 2 days ago
  • Communities to support your high-impact career
    Once you’ve decided to try and develop a great, impactful career, it can help to do it alongside others. Being part of an active community with similar goals can be a great source of motivation and insight, as well as potential career opportunities. Most of the communities we list below are…
    Probably Good | 2 days ago
  • Recommended courses for high-impact careers
    On this page, we’ve collected some of the best courses we know of for upskilling within various careers and cause areas. The large majority of them are free to access. Most can be completed at your own pace, though some are more structured, offering regular interaction…
    Probably Good | 2 days ago
  • Clarification: I really fucking like my job
    Recently, I wrote a blog post arguing that you shouldn’t become a blogger.
    Thing of Things | 2 days ago
  • EA Summit 2025: Paris - Retrospective
    On September 13th, 2025, we (EA France) hosted the EA Summit: Paris, in a very central location in the French capital. This event was part of the international series of EA Summits funded by the Centre for Effective Altruism.
    Effective Altruism Forum | 2 days ago
  • Geometric UDT
    This post owes credit to discussions with Caspar Oesterheld, Scott Garrabrant, Sahil, Daniel Kokotajlo, Martín Soto, Chi Nguyen, Lukas Finnveden, Vivek Hebbar, Mikhail Samin, and Diffractor. Inspired in particular by a discussion with cousin_it. Short version:
    AI Alignment Forum | 2 days ago
  • The Moral Decrepitude of Tucker Carlson
    Being opposed to giving a softball interview to a Hitler supporter isn't cancel culture
    Bentham's Newsletter | 2 days ago
  • November 2025
    Brain Misallocation, Transformative AI & Optimism
    Global Development & Economic Advancement | 2 days ago
  • History suggests the AI backlash will fail
    From Venetian monks to 19th century textile workers, those opposing new technology have rarely come out on top
    Transformer | 2 days ago
  • Troubling Trends In Companion Animal Adoption
    Although financial and housing barriers to companion animal adoption are growing in Canada and the U.S., there are key opportunities for advocates to take action.
    Faunalytics | 2 days ago
  • UT Austin’s Statement on Academic Integrity
    A month ago William Inboden, the provost of UT Austin (where I work), invited me to join a university-wide “Faculty Working Group on Academic Integrity.” The name made me think that it would be about students cheating on exams and the like. I didn’t relish the prospect but I said sure. Shortly afterward, Jim Davis, […]...
    Shtetl-Optimized | 2 days ago
  • Halfway to Anywhere
    “If you can get your ship into orbit, you’re halfway to anywhere.” - Robert Heinlein. This generalizes. I. Spaceflight is hard. Putting a rocket on the moon is one of the most impressive feats humans have ever achieved.
    LessWrong | 2 days ago
  • The economics of the baby bust with Jesús Fernández-Villaverde
    Episode nine of the Works in Progress podcast.
    The Works in Progress Newsletter | 2 days ago
  • Hidden Open Thread 406.5
    Astral Codex Ten | 2 days ago
  • The Bloomer's Paradox
    Astral Codex Ten | 2 days ago
  • Cash Back
    When I was 18, my dad took me to the bank to get my first credit card. I had a conversation with the bank teller that went something like this: Bank teller: This card gives 1% cash back. Me: What does that mean?. Bank teller: It means when you spend money with the card, you get 1% cash back. Me: But what does cash back mean, though?. Bank teller: It means you get cash back. Me: ….
    Philosophical Multicore | 2 days ago
  • Advancing Synergies Across Nutrition and Climate Action I-CAN ASSESSMENT 2025
    Advancing Synergies Across Nutrition and Climate Action: I-CAN Assessment 2025. Sections: The Climate & Nutrition Story; Climate & Nutrition Integration; Climate & Nutrition Financing.
    Global Alliance for Improved Nutrition | 2 days ago
  • 12 Theses on EA
    This is a crosspost from my Substack, where people have been liking and commenting a bunch. I'm too busy during my self-imposed version of Inkhaven to engage much – yes, pity me, I have to blog – but I don't want to leave Forum folks out of the loop! . . I’ve been following Effective Altruism discourse since 2014 and involved with the Effective Altruist community since 2015.
    Effective Altruism Forum | 2 days ago
  • Local and traditional food markets for thriving "smart cities"
    Local and traditional food retail markets are inherent in a city’s social fabric and the urban food environment. Millions of residents connect daily through food at local and traditional markets; and for many low-income urban residents, this is their primary source of food.
    Global Alliance for Improved Nutrition | 2 days ago
  • People Seem Funny In The Head About Subtle Signals
    WARNING: This post contains spoilers for Harry Potter and the Methods of Rationality, and I will not warn about them further. Also some anecdotes from slutcon which are not particularly NSFW, but it's still slutcon. A girl I was seeing once asked me to guess what Hogwarts house she was trying to channel with her outfit.
    LessWrong | 2 days ago
  • A 2032 Takeoff Story
    I spent 3 recent Sundays writing my mainline AI scenario. Having only spent 3 days on it, it’s not very well-researched (especially in the areas where I’m not well informed) or well-written, and the endings are particularly open-ended and weak. But I wanted to post the somewhat unfiltered 3-day version to normalize doing so.
    LessWrong | 2 days ago
  • Interview with Nebula-nominated SF writer P. H. Lee
    P.
    Thing of Things | 2 days ago
  • Introducing our new, more powerful search
    Finding what you’re looking for, or discovering something new, has never been easier.
    Our World in Data | 2 days ago
  • Apple Gemini deal terms 💰, Amazon layoff turmoil 💼, compiler targets 👨‍💻
    TLDR AI | 2 days ago
  • The Protein Problem
    Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. People can’t get enough protein. Fully 61% of Americans say they ate more protein last year — and 85% intended to eat more this year.
    Effective Altruism Forum | 3 days ago
  • Event Recap – Establishing AI Risk Thresholds and Red Lines: A Critical Global Policy Priority
    A CLTC–UC Berkeley and India AI Impact Summit 2026 Pre-Summit Discussion This recap was based on an event summary originally authored by Deepika Raman. As part of UC…. The post Event Recap – Establishing AI Risk Thresholds and Red Lines: A Critical Global Policy Priority appeared first on CLTC.
    Center for Long-Term Cybersecurity | 3 days ago
  • Meta-agentic Prisoner's Dilemmas
    Crosspost from my blog. In the classic Prisoner's Dilemma (https://www.lesswrong.com/w/prisoner-s-dilemma), there are two agents with the same beliefs and decision theory, but with different values. To get the best available outcome, they have to help each other out (even if they don't intrinsically care about the other's values); and they have to do so even though, if the one does not help...
    LessWrong | 3 days ago
  • Could humanity survive a total loss of agriculture?
    While technology and trade have made modern food systems increasingly resilient to disruptions, it is unknown if human society could survive the most extreme threats to agriculture, such as from severe climate change or nuclear/biological warfare. One way that society could withstand such disruptions is to make food without.
    Defenses in Depth | 3 days ago
  • Calling all mentors, the Tsunami is coming
    the latest updates on the welfare of future sentient beings
    AI for Animals | 3 days ago
  • New homepage for AI safety resources – AISafety.com redesign
    For those relatively new to AI safety, AISafety.com helps them navigate the space, providing lists of things like self-study courses, funders, communities, etc. But while the previous version of the site basically just threw a bunch of resources at the user, we’ve now redesigned it to be more accessible and therefore make it more likely that people take further steps towards entering the field.
    LessWrong | 3 days ago
  • Meta-agentic Prisoner's Dilemmas
    Crosspost from my blog. In the classic Prisoner's Dilemma (https://www.lesswrong.com/w/prisoner-s-dilemma), there are two agents with the same beliefs and decision theory, but with different values. To get the best available outcome, they have to help each other out (even if they don't intrinsically care about the other's values); and they have to do so even though, if the one does not help...
    AI Alignment Forum | 3 days ago
  • Sora is here. The window to save visual truth is closing
    Opinion: Sam Gregory argues that generative video is undermining the notion of a shared reality, and that we need to act before it’s lost forever...
    Transformer | 3 days ago
  • The Protein Problem
    We likely can't stop the protein craze — but we can make it less cruel...
    Farm Animal Welfare Newsletter | 3 days ago
  • AI Isn't Bad For The Environment
    Footnotes to Masley
    Bentham's Newsletter | 3 days ago
  • Faunalytics Index – November 2025
    This month’s Faunalytics Index provides facts and stats about the enforcement of U.K. farmed animal welfare laws, ocean theme parks in China, BIPOC adoption experiences, and more. The post Faunalytics Index – November 2025 appeared first on Faunalytics.
    Faunalytics | 3 days ago
  • Helen Toner on the geopolitics of AI in China and the Middle East
    The post Helen Toner on the geopolitics of AI in China and the Middle East appeared first on 80,000 Hours.
    80,000 Hours | 3 days ago
  • Being "Usefully Concrete"
    Or: "Who, what, when, where?" -> "Why?". In "What's hard about this? What can I do about that?", I talk about how, when you're facing a difficult situation, it's often useful to list exactly what's difficult about it. And then, systematically brainstorm ideas for dealing with those difficult things. Then, the problem becomes easy.
    LessWrong | 3 days ago
  • Modeling the geopolitics of AI development
    We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies would be pursued by superpowers and middle powers and which outcomes would result from them. You can read our paper here: ai-scenarios.com.
    LessWrong | 3 days ago
  • Evolution under a microscope
    Generations of microbes evolve in hours, not millennia. By speeding up Darwin’s clock, scientists have watched evolution happen in real time, and it’s changed how we understand natural selection.
    The Works in Progress Newsletter | 3 days ago
  • Is This Anything? 21
    More attempts
    Atoms vs Bits | 3 days ago
  • Our top tips for successful networking
    The post Our top tips for successful networking appeared first on 80,000 Hours.
    80,000 Hours | 3 days ago
  • Are we wrong to stop factory farms?
    I'm sharing an article by Rose Patterson from Animal Rising (AR), which responds to an EA criticism of AR's campaigns to block new factory farms in the UK. When we started our Communities Against Factory Farming campaign to stop every new factory farm from being built, we thought it would be a crowd-pleaser! Who in the animal movement could argue that this could actually be a bad thing?
    Effective Altruism Forum | 3 days ago
  • EA Forum Digest #265
    EA Forum Digest #265. Hello! Two giving season updates: I’ve announced the rewards for donors to our Donation Election Fund - you can read more here. Next week will be Funding Strategy Week on the EA Forum. Consider writing something! Enjoy the Digest, Toby (for the Forum team). We recommend: Humanizing Expected Value (kuhanj, 3 min).
    EA Forum Digest | 3 days ago
  • AnimalHarmBench 2.0: Evaluating LLMs on reasoning about animal welfare
    We are pleased to introduce AnimalHarmBench (AHB) 2.0, a new standardized LLM benchmark designed to measure multi-dimensional moral reasoning towards animals, now available to use on Inspect AI. As LLMs' influence over human policies and behaviors grows, their biases and blind spots will grow in importance too.
    Effective Altruism Forum | 3 days ago
  • New study on cluster randomised controlled trials for mosquito release interventions such as gene drive
    New technologies for suppressing populations of disease-transmitting mosquitoes involve releasing modified male mosquitoes into field environments. One example of these technologies is the sterile insect technique (SIT) which releases male mosquitoes that have been sterilised using radiation treatment.
    Target Malaria | 3 days ago
  • November Brief | Your guide to high-impact giving this season
    New opportunities at Open Philanthropy, The Humane League, Future of Life Institute & more...
    EACN Newsletter | 3 days ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    Effective Altruism Forum | 3 days ago
  • How Can I Not Know Whether I'm Having a Good Experience?
    I’m playing Elden Ring. I’m fighting a difficult boss, and I’m getting kind of frustrated. I die again. I’m thinking about whether I want to keep playing. I don’t know. Am I having a good time? I can’t tell. How is it that I can’t tell? Fundamentally, good experiences are good, and bad experiences are bad. But what if I don’t know whether I’m having a good experience? How is that possible?
    Philosophical Multicore | 3 days ago
  • Pillar 4: CHEWs Boosting Healthy Living Through PDM
    The post Pillar 4: CHEWs Boosting Healthy Living Through PDM appeared first on Living Goods.
    Living Goods | 3 days ago
  • Nyong’o launches Kisumu wellness protocol to boost preventive healthcare
    The post Nyong’o launches Kisumu wellness protocol to boost preventive healthcare appeared first on Living Goods.
    Living Goods | 3 days ago
  • Barn without a roof: outdoor access or not?
    Sentience Politics | 3 days ago
  • You are going to get priced out of the best AI coding tools
    The best AI tools will become far more expensive. Andy Warhol famously said:
    AI Safety Takes | 3 days ago
  • Notes on forecasting strategy
    Becoming a top forecaster may not work exactly how you think it does...
    Samstack | 3 days ago
  • Thoughts by a non-economist on AI and economics
    [Crossposted on Windows In Theory] . “Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture -- but none of that stuff had much effect on the quality of people's lives.
    LessWrong | 3 days ago
  • New CLTC Guide Focuses on Cybersecurity for Mutual Aid Organizations
    The UC Berkeley Cybersecurity Clinic and Fight for the Future have collaborated to create a guide aimed at enhancing the cybersecurity practices of mutual aid organizations. Released as part of the CLTC White Paper Series, the guide — Securing Mutual Aid: Cybersecurity Practices and Design Principles for Financial Technology — outlines best practices to help mutual aids use financial...
    Center for Long-Term Cybersecurity | 3 days ago
  • Heroic Responsibility
    Meta: Heroic responsibility is a standard concept on LessWrong. I was surprised to find that we don't have a post explaining it to people not already deep in the cultural context, so I wrote this one. Suppose I decide to start a business - specifically a car dealership. One day there's a problem: we sold a car with a bad thingamabob.
    LessWrong | 3 days ago
  • Low cost MacBook 💻, Google AI data centers 🛰️, inside xAI 🤖
    TLDR AI | 3 days ago
  • Temporary Policy Lead at NARN
    Policy and Programs Lead (temp). Organization: Northwest Animal Rights Network (NARN) Location: Remote (Washington-based preferred) Hours: 30 hours per week Compensation: $21-$26/hr Reports to: NARN Board. About NARN. The Northwest Animal Rights Network (NARN) advocates for the rights and well-being of all animals through education, advocacy, and community building.
    Animal Advocacy Forum | 4 days ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    LessWrong | 4 days ago
  • Breaking: Hundreds of Farmers Sent a Strong Message on Capitol Hill: Congress Should Leave Prop 12 Alone
    In October, over 200 farmers from across the nation came together on Capitol Hill in support of California’s Proposition 12 and in opposition to the EATS Act and any legislative language like it. The farmers held an impactful press conference at the National Press Building, engaged in a tractor and truck rally on the Hill, […].
    Mercy for Animals | 4 days ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    AI Alignment Forum | 4 days ago
  • Scale-up: the neglected bottleneck facing alternative proteins
    Hi everyone! I'm Alex, Managing Director of the Good Food Institute Europe. Thanks so much for taking the time to read my post on why scale-up is so important for the future of alternative proteins. This is a topic we're really passionate about at GFI Europe, and one that's becoming increasingly important as the sector matures. I’ll be around in the comments and happy to answer any questions.
    Effective Altruism Forum | 4 days ago
  • Why We Don’t Act On Our Values (And What to Do About It)
    Discover why people often fail to act on their own values — and how to close the gap between what you care about and what you do. Learn how despair, denial, and defiance hold us back, and find research-based ways to live more in line with your principles.
    Clearer Thinking | 4 days ago
  • GDM: Consistency Training Helps Limit Sycophancy and Jailbreaks in Gemini 2.5 Flash
    Authors: Alex Irpan* and Alex Turner*, Mark Kurzeja, David Elson, and Rohin Shah. You’re absolutely right to start reading this post! What a rational decision! Even the smartest models’ factuality or refusal training can be compromised by simple changes to a prompt.
    LessWrong | 4 days ago


PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.