Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.
  • Against The Omnipresent Advantage Argument For Trans Sports
    Astral Codex Ten | 3 hours ago
  • Principles and Generators of a Rationality Dojo
    Since 2023 I've been directing WARP, the Wandering Applied Rationality Program, with the help of ESPR and SPARC staff, which are summer camps I've taught at since 2017. For those that don't know, these ~10 day programs are hard to summarize well, but are generally meant to create an environment that helps participants better understand the world, themselves, and each other by offering a...
    LessWrong | 4 hours ago
  • Microsoft’s Fairwater datacenter will use more power than Los Angeles
    The next generation of AI data center campuses are city-scale.
    Epoch Newsletter | 5 hours ago
  • Postmodernism for STEM Types: A Clear-Language Guide to Conflict Theory
    Crossposted from Substack. Section I: Opening. In 2021, Richard Dawkins tweeted: . The fallout was immediate. The American Humanist Association revoked an award they’d given him 25 years earlier. A significant controversy erupted, splitting roughly into two camps. One camp defended Dawkins. They saw him raising a legitimate question about logical consistency.
    LessWrong | 5 hours ago
  • Forecasting Claude
    Above the Fold is "Playing it Safe"
    Manifold Markets | 5 hours ago
  • Training PhD Students to be Fat Newts (Part 2)
    [Thanks Inkhaven for hosting me! This is my fourth and last post and I'm already exhausted from writing. Wordpress.com!]. Last time, I introduced the concept of the “Fat Newt” (fatigue neutral) build, a way of skilling up characters in Battle Brothers that aims to be extremely economical with the fatigue resource, relying entirely on each brother’s base 15 fatigue regeneration per turn.
    LessWrong | 5 hours ago
  • Snippets on Living In Reality
    Social reality is quite literally another world, in the same sense that the Harry Potter universe is another world. Like the Harry Potter universe, social reality is a world portrayed primarily in text and in speech and in our imaginations. Like the Harry Potter universe, social reality doesn’t diverge completely from physical reality - they contain mostly the same cities, for instance.
    LessWrong | 5 hours ago
  • My idea of what's, like, #3 trending on Redtube right now
    Content warnings: sex, dubcon, drug use.
    Thing of Things | 6 hours ago
  • Courtship Confusions Post-Slutcon
    Going into slutcon, one of my main known-unknowns was… I’d heard many times that the standard path to hooking up or dating starts with two people bantering for an hour or two at a party, lacing in increasingly-unsubtle hints of interest. And even in my own imagination, I was completely unable to make up a plausible-sounding conversation which would have that effect.
    LessWrong | 6 hours ago
  • AI discourse analyzed (we looked at essays, Twitter, Bluesky, Truth Social)
    AI In Group Discourse. I wanted to programmatically analyze the AI in-group ecosystem and discourse using AI as an exploration in sensemaking during my time in the “ AI for Human Reasoning” FLF fellowship.
    Effective Altruism Forum | 7 hours ago
  • Training PhD Students to be Fat Newts (Part 1)
    Today, I want to introduce an experimental PhD student training philosophy. Let’s start with some reddit memes. Every gaming subreddit has its own distinct meme culture. On r/chess, there's a demon who is summoned by an unsuspecting beginner asking “Why isn’t this checkmate?”. . These posts are gleefully deluged by responses saying “Google En Passant” in some form or other.
    LessWrong | 8 hours ago
  • The Economics of Replacing Call Center Workers With AIs
    TLDR: Voice AIs aren't that much cheaper in the year 2025: My friend runs a voice agent startup in Canada for walk-in clinics. The AI takes calls and uses tools to book appointments in the EMR (electronic medical record) system. In theory, this helps the clinic hire less front desk staff and the startup makes infinite money. In reality, the margins are brutal and they barely charge above cost.
    LessWrong | 8 hours ago
  • Evaluating honesty and lie detection techniques on a diverse suite of dishonest models
    TL;DR: We use a suite of testbed settings where models lie—i.e. generate statements they believe to be false—to evaluate honesty and lie detection techniques. The best techniques we studied involved fine-tuning on generic anti-deception data and using prompts that encourage honesty. Read the full Anthropic Alignment Science blog post and the X thread. Introduction:
    LessWrong | 8 hours ago
  • Evolution & Freedom
    In Against Money Maximalism, I argued against money-maximization as a normative stance. Profit is a coherent thing you can try to maximize, but there are also other sorts of value. Profit-maximization isn't the unique rational way to engage with money. One way you could respond to this is economic Darwinism: "Sure, you can optimize other things than money, but over time, the market will come...
    LessWrong | 8 hours ago
  • Subliminal Learning Across Models
    Tl;dr: We show that subliminal learning can transfer sentiment across models (with some caveats). For example, we transfer positive sentiment for Catholicism, the UK, New York City, Stalin or Ronald Reagan across model families using normal-looking text. This post discusses under what conditions this subliminal transfer happens. —.
    AI Alignment Forum | 8 hours ago
  • Donation Election, diversity talent bank, career paths in Africa
    Your farmed animal advocacy update for late November 2025
    Hive | 9 hours ago
  • Building the Fortress
    Reframing Suffering-Focused Ethics By David Veldran Many moral views and social projects present themselves as inherently positive and constructive. They aim to add something to the world, to create, to build. Classical utilitarians seek to increase happiness and bring about a surplus of joy over misery. Communists aspire to realize a classless society. Kantians, perhaps, aim to […].
    Center for Reducing Suffering | 10 hours ago
  • Scared to Pledge? 5 financial steps for confident giving
    Cross-posted from the @High Impact Professionals blog - see the original here. --. In my years of working with people on their financial plans for giving, I’ve met many wonderful individuals who feel morally compelled to take a giving pledge. Intellectually, they are fully aware that they are in a position to help others. But emotionally, they aren’t ready to commit to a pledge.
    Effective Altruism Forum | 11 hours ago
  • The history of vaccines
    The early smallpox vaccines that kept dying out, why Émile Roux drilled into rabbits' skulls, and the lucky career changes that saved millions of lives.
    The Works in Progress Newsletter | 11 hours ago
  • You don't want to date multiple people who are monogamous with you
    Sometimes people say, “obviously, everyone would like to be polyamorous themselves and get to have sex with anyone they want, while their partners are all monogamous and only have sex with them.
    Thing of Things | 11 hours ago
  • Public Acceptability Of Standard U.S. Animal Agriculture Practices
    A Faunalytics study reveals that a clear majority of the U.S. public finds standard animal agriculture practices for pigs, cows, and chickens to be unacceptable. Explore the details and their implications for animal advocacy. The post Public Acceptability Of Standard U.S. Animal Agriculture Practices appeared first on Faunalytics.
    Faunalytics | 12 hours ago
  • SB 53 protects whistleblowers in AI — but asks a lot in return
    Opinion: Abra Ganz and Karl Koch argue that whistleblower protections in SB-53 aren’t good enough on the face of it — but how the state chooses to interpret the law could turn that around...
    Transformer | 12 hours ago
  • For A Short Period Of Time, You Can Save 21,000 Shrimp Per Dollar
    5 arguments for doing so!
    Bentham's Newsletter | 12 hours ago
  • Four Key Ways To Help Farmed Animals In Nigeria
    Although public awareness of animal welfare is relatively low in Nigeria, this report provides a roadmap for advocates looking to make a difference for the country’s growing number of farmed animals. The post Four Key Ways To Help Farmed Animals In Nigeria appeared first on Faunalytics.
    Faunalytics | 12 hours ago
  • Why the West was downzoned
    In the space of a few decades, nearly every city in the Western world banned densification. What happened?
    The Works in Progress Newsletter | 13 hours ago
  • 2025 Results, 2026 Plans and Funding Needs
    We have just released our end-of-year 2025 Results, 2026 Plans, and Funding Needs document that demonstrates what a donation to RP can accomplish: from contributing to better allocation of millions of…...
    Rethink Priorities | 14 hours ago
  • Is This Anything? 22
    Recently I've been getting into Gratitude Patrols. People talk about gratitude journals as being one of the few provably impactful psychological interventions [citation needed], but I think there's a benefit to physically embodying it by walking around your space each morning and thanking things. If you.
    Atoms vs Bits | 15 hours ago
  • Nutrition Investing: Moving from Awareness to Action
    Investing in nutrition isn’t just possible, it’s smart. That’s the key message that sticks with us a few weeks after the GIIN Impact Forum 2025, where we organised a session, “Nutrition Lens Investing: A Framework for Action”.
    Global Alliance for Improved Nutrition | 15 hours ago
  • 3 doubts about veganism
    I keep thinking about what kind of identity would be useful for building a powerful animal advocacy movement. Here are 3 features of veganism that I often think about which make me doubt its usefulness. Too maximalist. The official definition of veganism by the inventors of the term is the following:
    Effective Altruism Forum | 15 hours ago
  • Understanding Moral Disagreement Through Data
    We studied moral judgments in the U.S. and found extreme disagreement. Learn how 15 moral dimensions predict people’s moral views.
    Clearer Thinking | 16 hours ago
  • New “Top inviters” ranking | What to give this Christmas?
    Ayuda Efectiva | 17 hours ago
  • EA Forum Digest #268
    Hello! Three things to note: The Donation Election is now open! You can go vote on the Forum (if you had an account before October 24th). Read about all the candidates here (if you spot an error in this post, I have to donate $10), and read the rules here.
    EA Forum Digest | 17 hours ago
  • Vague promises don’t protect animals; transparency does
    Sentience Politics | 18 hours ago
  • Takeaways from the Eleos Conference on AI Consciousness and Welfare
    Crossposted from my Substack... . I spent the weekend at Lighthaven, attending the Eleos conference. In this post, I share thoughts and updates as I reflect on talks, papers, and discussions, and put out some of my takes since I haven't written about this topic before.
    LessWrong | 18 hours ago
  • Magic: The Gathering Arena decklists for people on a budget
    I’ve been playing a lot of MTG Arena lately, but I refuse to spend any money on it, which means I can’t craft many rare cards. When I look up meta decklists, they always include a lot of rares and mythic rares. I don’t want to spend all my rare wildcards on one deck!. That’s sort of what the Pauper format is for. Pauper decks are only allowed to use common cards, which makes them cheap.
    Philosophical Multicore | 20 hours ago
  • The Eugenics Debate: A postmortem
    On Suffering, Sterilization, and Why We Somehow Spent So Much Time Talking About Incest
    Dissentient | 21 hours ago
  • Epoch AI webinar: Inside the Frontier Data Centers hub
    Showcasing the new Epoch AI Frontier Data Centers hub.
    Epoch Newsletter | 22 hours ago
  • 7-Eleven reports cage-free progress for first time (US/CAN)
    7-Eleven, the biggest convenience store in the US (and the world) has reported its cage-free progress for the first time ever: We’re committed to working with suppliers toward a goal of sourcing 100 percent cage-free eggs for all U.S. and Canada stores by 2025, based on available supply.
    Animal Advocacy Forum | 1 days ago
  • Why AI Safety Won't Make America Lose The Race With China
    Astral Codex Ten | 1 days ago
  • OpenAI finetuning metrics: What is going on with the loss curves?
    Introduction. For our current project, we've been using the OpenAI fine-tuning API. To run some of our experiments, we needed to understand exactly how the reported metrics (loss and accuracy) are calculated. Unfortunately, the official documentation is sparse, and the most detailed explanation we could find was the following table from Microsoft's Azure documentation:
    LessWrong | 1 days ago
  • New “Top inviters” ranking
    Ayuda Efectiva | 1 days ago
  • Elon's reusable tunnelers 🚇, Alibaba's AI app 📱, Gemini 3 API features 👨‍💻
    TLDR AI | 1 days ago
  • How to Talk to Journalists
    TLDR: Just pick up the phone. Agree on permissions for information before you share it (see Step 4). I have worked as a professional journalist covering AI for over a year, and during that time, multiple people working in AI safety have asked me for advice on engaging with journalists. At this point, I've converged on some core lessons, so I figured I should share them more widely.
    Effective Altruism Forum | 1 days ago
  • Alignment will happen by default. What’s next?
    I’m not 100% convinced of this, but I’m fairly convinced, more and more so over time. I’m hoping to start a vigorous but civilized debate. I invite you to attack my weak points and/or present counter-evidence. My thesis is that intent-alignment is basically happening, based on evidence from the alignment research in the LLM era. Introduction.
    LessWrong | 1 days ago
  • Who should direct social spending?
    Individuals, Corporations, or Governments?
    Good Thoughts | 1 days ago
  • Alignment will happen by default. What’s next?
    I’m not 100% convinced of this, but I’m fairly convinced, more and more so over time. I’m hoping to start a vigorous but civilized debate. I invite you to attack my weak points and/or present counter-evidence. My thesis is that intent-alignment is basically happening, based on evidence from the alignment research in the LLM era. Introduction.
    AI Alignment Forum | 1 days ago
  • Ilya Sutskever – We're moving from the age of scaling to the age of research
    “These models somehow just generalize dramatically worse than people. It's a very fundamental thing.”...
    The Lunar Society | 1 days ago
  • Rob & Luisa chat kids, the fertility crash, and how the ‘50s invented parenting that makes us miserable
    The post Rob & Luisa chat kids, the fertility crash, and how the ‘50s invented parenting that makes us miserable appeared first on 80,000 Hours.
    80,000 Hours | 1 days ago
  • Eating the Ocean
    Vox’s Future Perfect takes pride in being one of the top destinations in media for consistent, rigorous, ethically clear-eyed coverage of one of the biggest stories of our time — the mass production of billions of animals for food on factory farms. But there’s arguably an even bigger, even more neglected story hidden within that […]...
    Future Perfect | 1 days ago
  • Christianity Is Very Morally Revisionary
    On giant torture pits
    Bentham's Newsletter | 1 days ago
  • How Retailers Can Boost Plant-Based Food Sales
    Retail strategies to boost plant-based food sales work differently depending on location. The post How Retailers Can Boost Plant-Based Food Sales appeared first on Faunalytics.
    Faunalytics | 1 days ago
  • Issue 21: The Great Downzoning
    Plus: Why cities in poor countries need wider streets, how to measure competition, and the South Korean baby bust.
    The Works in Progress Newsletter | 2 days ago
  • November 2025 North America Newsletter
    J-PAL North America's November newsletter features a new blog post on interpreting results from the Baby's First Years study, a feature on our 2025 Research Staff Training, and our recent Evidence Matters Convening in Seattle, WA.
    J-PAL | 2 days ago
  • Hiring your favorite Substacker: a guide
    Sometimes people want to hire me to write or edit things for them.
    Thing of Things | 2 days ago
  • The Unjournal: Bridging the Rigor/Impact Gaps for EA-relevant Research Questions
    Overview. The Unjournal is a nonprofit organization (est. 2023) that commissions rigorous public expert evaluations of impactful research. We've built a strong team, completed more than 55 evaluation packages, built a database of impactful research, launched a Pivotal Questions initiative, and are systematically evaluating research from EA-aligned organizations. We argue that. 1.
    Effective Altruism Forum | 2 days ago
  • How and Why You Should Cut Your Social Media Usage
    In the past year or so, I’ve become increasingly convinced that much of the time spent on the internet and on social media is bad both for me personally, and for many people in developed countries.
    Samstack | 2 days ago
  • Community Health at the Centre of Africa’s Health Sovereignty: Insights from the Africa Health Summit
    The post Community Health at the Centre of Africa’s Health Sovereignty: Insights from the Africa Health Summit appeared first on Living Goods.
    Living Goods | 2 days ago
  • Closing the information gap on female genital schistosomiasis
    Up to 56 million women and girls across Africa are estimated to be affected by female genital schistosomiasis, a frequently overlooked debilitating manifestation of schistosomiasis. This short film explores what it will take to transform health systems in order to correctly diagnose and treat this stigmatizing disease.
    The END Fund | 2 days ago
  • Young Scientist Webinar: Africa Youth Month with Rita Mwima and Hudson Onen
    November is Africa Youth Month, and this year we are shining the spotlight on two scientists working to end malaria in Uganda. Rita Mwima and Hudson Onen are part of the team at the Uganda Virus Research Institute. Rita is a computational biologist using population genetics and mathematical modeling to study malaria vector dynamics. Her […].
    Target Malaria | 2 days ago
  • How to Fix Quidditch
    Inspired by this post by Tomás Bjartur, which is an allegory; but I’m not writing an allegory, I’m writing about the rules of Quidditch. The rules of Quidditch have a big problem. The game ends when a seeker catches the snitch, and the snitch is worth 150 points. So most of the players on the field don’t matter; in almost all games, the only thing that matters is who catches the snitch.
    Philosophical Multicore | 2 days ago
  • Podcast Wireframe
    GAIN 🇰🇪 on Socials.
    Global Alliance for Improved Nutrition | 2 days ago
  • Announcing ClusterFree: A cluster headache advocacy and research initiative (and how you can help)
    [xposted in EA Forum] Today we’re announcing a new cluster headache advocacy and research initiative: ClusterFree Learn more about how you (and anyone) can help. Our mission ClusterFree’s mission is to help cluster headache patients globally access safe, effective pain relief treatments as soon as possible through advocacy and research. Cluster headache (also known as […]...
    Qualia Computing | 2 days ago
  • Frontier Data Centers hub on mobile
    AI infrastructure, now in your pocket.
    Epoch Newsletter | 2 days ago
  • Maybe Insensitive Functions are a Natural Ontology Generator?
    The most canonical example of a "natural ontology" comes from gasses in stat mech. In the simplest version, we model the gas as a bunch of little billiard balls bouncing around in a box. The dynamics are chaotic. The system is continuous, so the initial conditions are real numbers with arbitrarily many bits of precision - e.g.
    LessWrong | 2 days ago
  • Claude Opus 4.5 🤖, Amazon's Starlink competitor 🛰️, building a search engine 👨‍💻
    TLDR AI | 2 days ago
  • Plinko PIR tutorial
    Vitalik Buterin | 2 days ago
  • Things my kids don’t know about sex
    Kind of a mixed bag. The post Things my kids don’t know about sex appeared first on Otherwise.
    Otherwise | 2 days ago
  • November 2025 Updates
    Every month we send an email newsletter to our supporters sharing recent updates from our work. We publish selected portions of the newsletter on our blog to make this news more accessible to people who visit our website. For key updates from the latest installment, please see below! If you’d like to receive the complete newsletter in your inbox each month, you can subscribe here. Read More.
    GiveWell | 2 days ago
  • The Enemy Gets The Last Hit
    Disclaimer: I am god-awful at chess. I. Late-beginner chess players, those who are almost on the cusp of being basically respectable, often fall into a particular pattern. They've got the hang of calculating moves ahead; they can make plans along the lines of "Ok, so if I move my rook to give a check, the opponent will have to move her king, and then I can take her bishop."...
    LessWrong | 2 days ago
  • Monthly Spotlight: Cellular Agriculture Australia
    Discover how Cellular Agriculture Australia is building the ecosystem for cellular agriculture and driving systemic change for animals and the future food system. …  Read more...
    Animal Charity Evaluators | 2 days ago
  • Reasoning Models Sometimes Output Illegible Chains of Thought
    TL;DR: Models trained with outcome-based RL sometimes have reasoning traces that look very weird. In this paper, I evaluate 14 models and find that many of them often generate pretty illegible CoTs. I show that models seem to find this illegible text useful, with a model’s accuracy dropping heavily when given only the legible parts of its CoT, and that legibility goes down when answering...
    LessWrong | 2 days ago
  • Stop Applying And Get To Work
    TL;DR: Figure out what needs doing and do it, don't wait on approval from fellowships or jobs. If you.... Have short timelines. Have been struggling to get into a position in AI safety. Are able to self-motivate your efforts. Have a sufficient financial safety net. ... I would recommend changing your personal strategy entirely.
    LessWrong | 2 days ago
  • Inkhaven Retrospective
    Here I am on the plane on the way home from Inkhaven. Huge thanks to Ben Pace and the other organizers for inviting me. Lighthaven is a delightful venue and there sure are some brilliant writers taking part in this — both contributing writers and participants.
    LessWrong | 2 days ago
  • Reasoning Models Sometimes Output Illegible Chains of Thought
    TL;DR: Models trained with outcome-based RL sometimes have reasoning traces that look very weird. In this paper, I evaluate 14 models and find that many of them often generate pretty illegible CoTs. I show that models seem to find this illegible text useful, with a model’s accuracy dropping heavily when given only the legible parts of its CoT, and that legibility goes down when answering...
    AI Alignment Forum | 2 days ago
  • The Coalition
    Summary: A defensive military coalition is a key frame for thinking about our international agreement aimed at forestalling the development of superintelligence. We introduce historical examples of former rivals or enemies forming defensive coalitions in response to an urgent mutual threat, and detail key aspects of our proposal which are analogous.
    LessWrong | 2 days ago
  • The Humane League is hiring!
    🐓 As Temporary Global Campaigns Coordinator, you will be part of a team responsible for researching, coordinating, and launching hard-hitting global corporate animal welfare campaigns against major multinational companies. These campaigns involve collaboration and coordination with animal protection organizations around the world and directly contribute to The Humane League’s org-wide goal of...
    Animal Advocacy Forum | 2 days ago
  • URGENT: Easy Opportunity to Help Many Animals
    Crosspost. (I think this is a pretty important post to get the word out about, so I’d really appreciate you restacking it). The EU is taking input into their farm animal welfare policies until December 12.
    Effective Altruism Forum | 2 days ago
  • Philosophical Pattern-Matching
    The struggle to replace philosophical stereotypes with substance
    Good Thoughts | 2 days ago
  • How to create climate progress without political support
    How to make progress when policymakers don't care about the climate
    Effective Environmentalism | 2 days ago
  • 🟩 Ukraine peace plan, US continues activity around Venezuela, state AI regulation moratorium round 2 || Global Risks Weekly Roundup #47/2025
    Executive summary
    Sentinel | 2 days ago
  • Predicting Replicability Challenge: Round 1 Results and Round 2 Opportunity
    Replicability refers to observing evidence for a prior research claim in data that is independent of the prior research. It is one component of establishing credibility of research claims by demonstrating that there is a regularity in nature to be understood and explained. Repeating studies to assess and establish replicability can be costly and time-consuming. Partly, that is inevitable.
    Center for Open Science | 2 days ago
  • ChinAI #337: China's First AI English Teacher Earns its Stripes
    Why Chinese parents have sought out Zebra English's AI tutor Jessica
    ChinAI Newsletter | 2 days ago
  • We won't solve non-alignment problems by doing research
    Even if we solve the AI alignment problem, we still face non-alignment problems, which are all the other existential problems that AI may bring. People have written research agendas on various imposing problems that we are nowhere close to solving, and that we may need to solve before developing ASI.
    Effective Altruism Forum | 2 days ago
  • The Muslim Social: Neoliberalism, Charity, and Poverty in Turkey
    Editors’ Note: Gizem Zencirci introduces her book, The Muslim Social: Neoliberalism, Charity, and Poverty (Syracuse, 2024), which recently received the 2025 Outstanding Book Prize from ARNOVA (Association for Research on Nonprofit Organizations and Voluntary Action).
    HistPhil | 2 days ago
  • The Most Important Thing We'll Ever Do
    How to ensure the future flourishes
    Bentham's Newsletter | 2 days ago
  • Mixed views on pronoun circles
    Various groups, in an attempt to be trans-inclusive, have implemented pronoun circles (everyone goes around in a circle and give their pronouns), pronoun badges (everyone has their pronouns on their name badge), and pronouns in email signatures.
    Thing of Things | 2 days ago
  • A Review Of Restraints For Walking Your Dog
    Researchers unpack what the science says about common dog-walking gear and how each option affects comfort, safety, and behavior. The post A Review Of Restraints For Walking Your Dog appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Help Us Respond to an Uncertain Future for Global Health
    It has been a tumultuous year for global health. In early 2025, the US government cut billions of dollars in foreign aid, affecting millions of people around the world and creating substantial uncertainty that continues to ripple through health and development programs around the world.
    GiveWell | 3 days ago
  • Should we ban ugly buildings?
    Episode ten of the Works in Progress podcast is surprisingly NIMBY.
    The Works in Progress Newsletter | 3 days ago
  • Taking Jaggedness Seriously
    Why we should expect AI capabilities to keep being extremely uneven, and why that matters
    Rising Tide | 3 days ago
  • Grant Proposal Maker
    Hi All As a part of Amplify for Animals' AI Training Program, I created this custom GPT last week, and I have been working to improve it to the extent that organizations can actually use this. The tool can help you create grant proposals. Just share the funder's RFP or website and link to your project or detailed document regarding your project.
    Animal Advocacy Forum | 3 days ago
  • The truth about grazing in the U.S.
    Good morning Fast Community. Our latest blog post on Inside Animal Ag explores the “debate” about grazing’s impacts on U.S. land degradation: “The beef industry and its supporters have kept alive the notion that there is some controversy about whether grazing cattle in America is beneficial for the land.
    Animal Advocacy Forum | 3 days ago
  • Animal rights advocate elected as town assembly member in Japan
    Dear FAST, some good news from Japan! Midori Meguro, long time animal advocate and executive director of Lib (one of Japan’s few animal rights organizations) just got elected as a town assembly member in Kiso.
    Animal Advocacy Forum | 3 days ago
  • Sentient Futures Summit (SFS) Bay Area 2026
    Sentient Futures Summit (SFS) Bay Area 2026 is a three-day conference taking place February 6-8th, to explore the intersection of AI and sentient non-humans – both biological (i.e. animals) and potentially artificial. Register here for an Early Bird 30% discount before December 2nd!
    Animal Advocacy Forum | 3 days ago
  • Destiny Interviews Saar Wilf: The Case for COVID-19 Lab Origin
    Rootclaim founder Saar Wilf joined Destiny to discuss the recent and strongest evidence yet for COVID-19’s lab origin. Since our debate with Peter Miller, new findings have emerged that strongly support the lab-origin hypothesis. In this interview, Saar walks through the data, explains Rootclaim’s probability-based analysis, and answers Destiny’s questions.
    Rootclaim | 3 days ago
  • Import AI 436: Another 2GW datacenter; why regulation is scary; how to fight a superintelligence
    Is AI balkanization measurable?
    Import AI | 3 days ago
  • Open Thread 409
    Astral Codex Ten | 3 days ago
  • ATVBT Year-End Book-Recs
    A blog's recs should exceed its asks, or what's a heaven for?
    Atoms vs Bits | 3 days ago
  • Meet the Candidates: Donation Election 2025
    The Donation Election has begun! Three important links: The voting portal (voting is open now!). How to vote, and rules for voting. The Donation Election Fund. This post introduces our candidates. We’re splitting by cause area for easy reading, but in the election they are all competing against each other.
    Effective Altruism Forum | 3 days ago

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.