Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Supervillain Monologues Are Unrealistic
    Supervillain monologues are strange. Not because a supervillain telling someone their evil plan is weird; in fact, that's what we should actually expect. No, the weird bit is that people love a good monologue. Wait, what? OK, so why should we expect supervillains to tell us their master plan?
    LessWrong | 30 minutes ago
  • LLM-generated text is not testimony
    Crosspost from my blog. Synopsis. When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements. As of 2025, LLM text does not have those elements behind it.
    LessWrong | 32 minutes ago
  • November 2025 Newsletter
    🚀 The latest news from the EA community...
    Altruismo eficaz | 5 hours ago
  • No Need for Explanation
    Here, I explain why Phenomenal Conservatism is better than Phenomenal Explanationism.
    Fake Nous | 5 hours ago
  • A Very Disturbing Moral Argument
    How morality might be inverted
    Bentham's Newsletter | 5 hours ago
  • #84 – Dean Spears on the Case for People
    Dean Spears is an Economic Demographer, Development Economist, and Associate Professor of Economics at the University of Texas at Austin. With Michael Geruso, Dean is the co-author of After the Spike: Population, Progress, and the Case for People. You can see a full transcript and a list of resources on the episode page on our website. We're back from a hiatus!
    Hear This Idea | 7 hours ago
  • Factory Farming is a Blight
    The practices of industrialized animal farming are aesthetically and morally revolting. These practices can be phased out. The post Factory Farming is a Blight appeared first on Palladium.
    Palladium Magazine Newsletter | 7 hours ago
  • The Ozempic effect is finally showing up in obesity data
    For years, obesity rates in the US have gone in one direction: up. From the first year it was launched, Gallup’s National Health and Well-Being Index has found that the share of US adults reporting obesity has climbed and climbed, rising from 25.5 percent in 2008 to 39.9 percent in 2022. That survey caught the […]...
    Future Perfect | 8 hours ago
  • Why you should write blog posts and not be a blogger
    I am doing Inkhaven this month, which means that (if all goes well) you’ll get to hear from me every day for the next 30 days.
    Thing of Things | 8 hours ago
  • Helping people spot misinformation
    And the greatest gift psychology gave the world
    Reasonable People | 11 hours ago
  • Every Forum Post on EA Career Choice & Job Search
    TLDR: I went through the entirety of the career choice, career advising, career framework, career capital, working at EA vs. non-EA orgs, and personal fit tags and have filtered to arrive at a list of all posts relevant to figuring out the career aspect of the EA journey up until now (10/25/25).
    Effective Altruism Forum | 14 hours ago
  • Facing AI and Disinformation - with Tim (Bienfaits Pour Tous)
    ⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️ Interested in this content? Subscribe and click the 🔔 Enjoyed this video? Consider giving it a 👍 and sharing it.
    The Flares | 14 hours ago
  • Can AI systems introspect?
    A reflection on a selection of results by the exceptional Jack Lindsey
    Experience Machines | 15 hours ago
  • On keeping a packed suitcase
    This Halloween, I didn’t need anything special to frighten me. I walked around all day in a haze of fear and depression, unable to concentrate on my research or anything else. I saw people smiling, dressed up in costumes, and I thought: how? The president of the Heritage Foundation, the most important right-wing think tank […]...
    Shtetl-Optimized | 16 hours ago
  • Anthropic's Pilot Sabotage Risk Report
    As practice for potential future Responsible Scaling Policy obligations, we're releasing a report on misalignment risk posed by our deployed models as of Summer 2025. We conclude that there is very low, but not fully negligible, risk of misaligned autonomous actions that substantially contribute to later catastrophic outcomes.
    LessWrong | 20 hours ago
  • The Epoch AI Brief - October 2025
    Report on decentralized training, new Epoch Capabilities Index for tracking AI progress, FrontierMath evaluations of leading models, revenue insights on OpenAI, and hiring for two open positions.
    Epoch Newsletter | 24 hours ago
  • AI, Animals & Digital Minds NYC 2025: Retrospective
    Our Mission: Rapidly scale up the size and influence of the community trying to make AI and other transformative technologies go well for sentient nonhumans. One of the key ways we do this is through our events. This article gives insight into our most recent event, AI, Animals and Digital Minds NYC 2025, including: lightning talks, topics and ideas covered, and attendee feedback.
    Effective Altruism Forum | 1 days ago
  • Talk to the City Case Study: Amplifying Youth Voices in Australia
    YWA deployed Talk to the City (T3C) as one of their research platforms, introducing an innovative interface that bridged the gap between large-scale data collection and qualitative insight. The platform's key innovation lay in its interactive visualization interface, which allowed researchers to dynamically explore relationships between different data points while maintaining direct...
    AI Objectives Institute | 1 days ago
  • Animal Equality updates & vacancies - October 2025
    Hello everyone! Here you can find the October updates from Animal Equality. We hope that you find it helpful and inspiring. We also include our current job openings. Thank you! Global: Animal Equality concluded seven weeks of protests in the Netherlands against grocery giant Ahold Delhaize, demanding an end to the use of cages for hens in its U.S. supply chain.
    Animal Advocacy Forum | 1 days ago
  • Good tokens 2025-10-31
    Spooky tokens
    Atoms vs Bits | 1 days ago
  • The Rise and Fall of Nvidia’s Geopolitical Strategy
    China’s Cyberspace Administration last month banned companies from purchasing Nvidia’s H20 chips, much to the chagrin of its CEO Jensen Huang. This followed a train wreck of events that unfolded over the summer. The post The Rise and Fall of Nvidia’s Geopolitical Strategy appeared first on AI Now Institute.
    AI Now Institute | 1 days ago
  • Want to grow your group’s impact? Apply to the Organiser Support Programme (OSP)!
    Want to grow your group’s impact? Apply to the Organiser Support Programme (OSP)! Applications have opened for CEA’s mentorship program, the online conference EA Connect is happening soon, and other opportunities! Effective Altruism South Africa's annual...
    EA Groups Newsletter | 1 days ago
  • Newsletter #4
    More useful AI stuff.
    Peter Hartree | 1 days ago
  • From Open Science to AI: Benchmarking LLMs on Reproducibility, Robustness, and Replication
    At the Center for Open Science (COS), our work is about making research more transparent, rigorous, and verifiable. As AI tools enter the research workflow, we need evidence about what they actually contribute to scientific credibility.
    Center for Open Science | 1 days ago
  • Open positions to grow our podcast team
    The post Open positions to grow our podcast team appeared first on 80,000 Hours.
    80,000 Hours | 1 days ago
  • FAQ: Expert Survey on Progress in AI methodology
    Context
    AI Impacts | 1 days ago
  • The markets aren’t bracing for an AI crash — yet
    Transformer Weekly: Blackwell chips and China, White House warning to pro-AI super PAC, and OpenAI’s restructure...
    Transformer | 1 days ago
  • Linkpost for October
    Effective Altruism
    Thing of Things | 1 days ago
  • How Much Should We Spend on Scientific Replication?
    A data-driven framework for targeting replication funding where it matters most
    Institute for Progress | 1 days ago
  • What You Need to Refute Arguments With Astronomical Stakes
    A merely pretty good rebuttal isn't enough
    Bentham's Newsletter | 1 days ago
  • Third Latin American Congress of Animal Law, Arba 2025
    Hello FAST community. I hope you're all doing great! We would like to invite you to register for our Third Latin American Congress of Animal Law, Arba 2025, which will be held virtually on Thursday, November 13, from 5:00 p.m. to 9:00 p.m. (Peruvian time). We believe that holding this third congress represents a significant achievement for the region.
    Animal Advocacy Forum | 1 days ago
  • Sarah Paine – How Russia sabotaged China's rise
    Plus, where Russia and China go from here
    The Lunar Society | 1 days ago
  • Animal Welfare Legislation In The European Union: A Call For Consistency
    Across the European Union, member states have taken different approaches to animal welfare legislation, causing gaps in protection that leave farmed animals vulnerable to harmful systems and practices. The post Animal Welfare Legislation In The European Union: A Call For Consistency appeared first on Faunalytics.
    Faunalytics | 1 days ago
  • Compassion in World Farming is Hiring: Senior Digital Campaigns Coordinator
    Compassion in World Farming International is a global movement transforming the future of food and farming. We’re recruiting for a creative and driven part-time Senior Digital Campaigns Coordinator to help us mobilise public support and influence decision-makers through compelling digital campaigns.
    Animal Advocacy Forum | 1 days ago
  • How health is financed in LMICs—and why it matters for doing good | Peter Koziol | EAG London: 2025
    How health is financed in low- and middle-income countries (LMICs) determines the effectiveness and reach of global health efforts. In this talk, Peter Koziol explores the key dynamics of health financing—from the scale of different funding sources and the roles of major actors, to how these factors impact specific diseases—revealing what this means for those aiming to maximize their positive...
    Centre for Effective Altruism | 1 days ago
  • Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes
    By Robert Wiblin | Watch on Youtube | Listen on Spotify | Read transcript. Episode summary. Whatever your skills are, whatever your interests are, we’re out of the world where you have to be a conceptual self-starter, theorist mathematician, or a policy person — we’re into the world where whatever your skills are, there is probably a way to use them in a way that is helping make maybe...
    Effective Altruism Forum | 1 days ago
  • Incofin and GAIN expand nutrition-focused investments in East Africa’s dairy sector
    Incofin Investment Management and the Global Alliance for Improved Nutrition (GAIN), through the fund ‘Nutritious Foods Financing Facility (N3F)’, announce two new investments in East Africa’s dairy sector: Mujuni Ventures Limited in Uganda and Narumoro Dairy in Kenya.
    Global Alliance for Improved Nutrition | 1 days ago
  • The challenge of creating brains in a lab
    They’re growing miniature 3D brains from stem cells. These aren’t your fictional mad scientists’ brains in a vat; they’re organoids, and they grow in petri dishes. They’re also incredibly cool. We can, should, and will use cerebral organoids to discover new medical treatments, study brain development, reduce the demand for animal testing, and even power […]...
    Future Perfect | 1 days ago
  • The myth of the carnivore caveman
    Across the far right, a paranoid prophecy has been taking hold: the belief that globalist elites want to take meat off the menu and replace it with insects. The charge has been spouted in one version or another by provocateurs like Tucker Carlson, Mike Cernovich, and Jordan Peterson, and repeated by countless accounts on social […]...
    Future Perfect | 1 days ago
  • HSA welcomes recommendation from the UK Animal Welfare Committee calling for a ban on the use of CO2 to stun pigs
    We are delighted to see a strong recommendation from the UK Animal Welfare Committee (AWC) which calls for a ban on the use of CO₂ to stun pigs. The report, to which we contributed evidence, echoes our calls for a ban with a phase-out period that should be as short as possible and within five years.
    Humane Slaughter Association | 1 days ago
  • Things I read and liked in October
    Halloween, human enhancement, heartbreak, hermeneutics
    Experience Machines | 2 days ago
  • Resampling Conserves Redundancy & Mediation (Approximately) Under the Jensen-Shannon Divergence
    Around two months ago, John and I published Resampling Conserves Redundancy (Approximately). Fortunately, about two weeks ago, Jeremy Gillen and Alfred Harwood showed us that we were wrong. This proof achieves, using the Jensen-Shannon divergence ("JS"), what the previous one failed to show using KL divergence ("DKL").
    LessWrong | 2 days ago
  • Anthropic's Pilot Sabotage Risk Report
    As practice for potential future Responsible Scaling Policy obligations, we're releasing a report on misalignment risk posed by our deployed models as of Summer 2025. We conclude that there is very low, but not fully negligible, risk of misaligned autonomous actions that substantially contribute to later catastrophic outcomes.
    AI Alignment Forum | 2 days ago
  • SpaceX vs Blue Origin 🚀, Android always-on apps 📱, design tokens 🎨
    TLDR AI | 2 days ago
  • The Diffusion Dilemma
    Technological progress is often equated with invention, yet history shows that invention alone rarely transforms society without effective diffusion. This paper examines the persistent “diffusion dilemma,” the lag between the emergence of general-purpose technologies and their broad, productive adoption.
    AI Objectives Institute | 2 days ago
  • Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
    According to the Sonnet 4.5 system card, Sonnet 4.5 is much more likely than Sonnet 4 to mention in its chain-of-thought that it thinks it is being evaluated; this seems to meaningfully cause it to appear to behave better in alignment evaluations.
    LessWrong | 2 days ago
  • Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
    According to the Sonnet 4.5 system card, Sonnet 4.5 is much more likely than Sonnet 4 to mention in its chain-of-thought that it thinks it is being evaluated; this seems to meaningfully cause it to appear to behave better in alignment evaluations.
    AI Alignment Forum | 2 days ago
  • Steering Evaluation-Aware Models to Act Like They Are Deployed
    🐦 Tweet thread, 📄 Paper, 🖥️ Code, 🤖 Evaluation Aware Model Organism. TL;DR: We train an evaluation-aware LLM. Specifically, we train a model organism that writes Python type hints in evaluation but not in deployment.
    AI Alignment Forum | 2 days ago
  • Steering Evaluation-Aware Models to Act Like They Are Deployed
    🐦 Tweet thread, 📄 Paper, 🖥️ Code, 🤖 Evaluation Aware Model Organism. TL;DR: We train an evaluation-aware LLM. Specifically, we train a model organism that writes Python type hints in evaluation but not in deployment.
    LessWrong | 2 days ago
  • Support Metaculus' First Animal-Focused Forecasting Tournament
    I'm putting together Metaculus' first animal-focused forecasting tournament, and am reaching out to ask for your support in making this happen. What is it?. This tournament will generate probabilistic forecasts on decision-relevant questions affecting animals, from alternative proteins and animal welfare policy to AI impacts and wild animal welfare.
    Effective Altruism Forum | 2 days ago
  • What giving people money doesn’t fix
    GiveDirectly has written a lot about what giving people money is consistently very good at (e.g., recipients earn more, spend more, own more assets, and the local economy gets a boost), the myths that aren’t true (e.g., cash doesn’t make people work less or drink more), and the fact that it’s what most recipients prefer. […]...
    GiveDirectly | 2 days ago
  • My conclusions on AI 2027
    The transition to superintelligence may be a fast, "software-only" process that could play out in a single year due to compounding algorithmic progress, independent of compute constraints. This rapid takeoff creates a period of intense geopolitical instability, since the first nation to reach superintelligence gains a decisive strategic advantage,...
    Altruismo Eficaz | 2 days ago
  • Beyond the Spreadsheets: Malawi Site Visit Podcast Series
    Our Beyond the Spreadsheets podcast mini-series lets you ride along with our leadership team on their recent weeklong site visit to Malawi. Recorded daily during the trip, the series shares the behind-the-scenes experience of a GiveWell site visit through real-time reflections and clips of conversations.
    GiveWell | 2 days ago
  • AI + Math Chat #2 - Formalizing Proofs & Breaking Codes
    “Can LLMs go beyond pattern-matching to produce fully formalized mathematical proofs and novel cryptographic attacks?” w. Kevin Buzzard, Alex Kontorovich, Kristin Lauter, Yang-Hui He.
    Epoch Newsletter | 2 days ago
  • AI + Math Chat #1 - Thinking smarter not harder
    “How close are today’s large-language models to genuine mathematical creativity?” with mathematicians Richard Borcherds, Sergei Gukov, Daniel Litt, and Ken Ono (originally uploaded Aug 18, 2025).
    Epoch Newsletter | 2 days ago
  • The End of OpenAI’s Nonprofit Era
    Key regulators have agreed to let the company kill its profit caps and restructure as a for-profit — with some strings attached. This is the full text of a post first published on Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence.
    Effective Altruism Forum | 2 days ago
  • [CAREERS] Have you been wanting to work for The Humane League? Now is your chance!
    Global Corporate Relations Lead. 🐓About the role 🐓. As Global Corporate Relations Lead, you will be part of a small, high-impact team that specializes in advancing the welfare of animals raised for food through outreach to major international food companies.
    Animal Advocacy Forum | 2 days ago
  • Preprints in Action: Advancing Open Science Across Communication Sciences and Disorders
    When researchers Danika Pfeiffer, Austin Thompson, Alisa Baron, Collin Brice, Brittany Ciullo, Micah E. Hirsch, Helen Long, and Andrea Ford posted their latest study on EdArXiv, a preprint server hosted on the Open Science Framework (OSF), they didn’t expect to inspire new research almost immediately.
    Center for Open Science | 2 days ago
  • We may never get bird flu — or egg prices — under control
    It might now be a distant memory, but by the end of last winter, the average cost of a dozen eggs soared to a record high of $6.23. (It’s now at $3.49.) The cause was H5N1, a highly pathogenic strain of avian influenza — or bird flu — that wild birds shed near farms as […]...
    Future Perfect | 2 days ago
  • Q&A with Maris Vainre: Job Satisfaction and the Power of Preprints
    What happens when researchers make their findings openly available before the traditional peer-review and publication process? For Maris Vainre, PhD, and her co-authors, sharing their study as a preprint led to media attention from outlets like New Scientist, generated over 1,800 downloads in several months, and sparked invitations to share their findings with audiences in and beyond academia.
    Center for Open Science | 2 days ago
  • Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes
    The post Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes appeared first on 80,000 Hours.
    80,000 Hours | 2 days ago
  • More than 30,000 petition signatures delivered to Dairy Management Inc. headquarters
    Supporters demand transparency and an end to routine cruelty in the dairy industry ROSEMONT, Ill. — On October 29, Mercy For Animals hand-delivered thousands of petition signatures to the headquarters of Dairy Management Inc., urging the corporation to stop whitewashing the routine cruelty behind dairy products. The action follows a recent undercover investigation at a […].
    Mercy for Animals | 2 days ago
  • Japan’s unusual approach to AI policy
    The country’s penalty-free AI legislation relies on social pressure and voluntary compliance — and experts say it could work elsewhere...
    Transformer | 2 days ago
  • Food Systems NDC Scorecard reveals opportunities for increased ambition in climate action ahead of COP30
    New international assessments showcase where national climate plans succeed and fall short on food systems, with Kenya, Somalia, Switzerland and the UK leading the way while others lag. LOS ANGELES — As countries prepare for COP30 in Belém, the Food Systems NDC Scorecard presents the first independent evaluation of how countries around the world integrate […].
    Mercy for Animals | 2 days ago
  • The CGD Podcast: Where AI Meets Development with Temina Madon and Han Sheng Chia
    From chatbots supporting new mothers and nutrition coaches guiding families, to tutoring tools for children and apps advising farmers, artificial intelligence is beginning to find its place in development. But is it ready—and are we?
    CGD Podcast | 2 days ago
  • The Ethics Of Using AI To Talk To Animals
    For as long as humans have been able to imagine other minds, we’ve wanted to speak to animals. With the growing capabilities of artificial intelligence models, scientists believe we may only be a few years away from a world where we talk to animals — and they talk back. The post The Ethics Of Using AI To Talk To Animals appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Some surprising hiring practices I follow (as a hiring manager and grantmaker in EA)
    The best approach to hiring differs by industry: startups, think tanks, video production, non-profits, academia, and news outlets will all have different best practices. The practices that work best in the EA ecosystem will be different again – but unfortunately the effective altruism organisation landscape is much smaller than all of those,...
    Effective Altruism Forum | 2 days ago
  • Cultivated meat research just got a major boost. Here’s why it matters.
    GFI and Tufts University accelerate progress with new open-access cultivated meat cell lines and culture media formulations that can save researchers time and money.
    Good Food Institute | 2 days ago
  • Building a Sustainable Path to NTD Elimination: Angola’s Movement to Strengthen Health Systems
    Angola is charting a new course in.... The post Building a Sustainable Path to NTD Elimination: Angola’s Movement to Strengthen Health Systems appeared first on The END Fund.
    The END Fund | 2 days ago
  • Thing of Things Posts Index
    Substack archives are hard to search, which makes me sad because my old posts are full of incredible bangers.
    Thing of Things | 2 days ago
  • Sonnet 4.5's eval gaming seriously undermines alignment evals
    And this seems caused by training on alignment evals.
    Redwood Research | 2 days ago
  • How to Use AI for Quality, Not Just for Speed
    "Too much of the narrative is around it being an all knowing Oracle which hinders how we think about it." "Slow down, break it down into small steps, think about it, really engage, really bring your full self." "Work together for quality, not just for speed."
    Future of Life Institute | 2 days ago
  • Crustacean Compassion reach boiling point – ban boiling crabs and lobsters alive!
    Crustacean Compassion hosted a meet outside of Parliament with other animal advocates to urgently call on Defra to ban boiling alive.
    Crustacean Compassion | 2 days ago
  • Links For October 2025
    Astral Codex Ten | 2 days ago
  • Resolving radical cluelessness with metanormative bracketing
    (This post will be much easier to follow if you’ve first read either “Should you go with your best guess?” or “The challenge of unawareness for impartial altruist action guidance”. But it’s meant to be self-contained, assuming the reader is familiar with the basic idea of cluelessness.).
    Effective Altruism Forum | 2 days ago
  • Study: Giving people money returned 2.5x the value without causing inflation
    Last year, GiveWell revised its estimate of our cost-effectiveness, rating our work 3–4x higher than before. That shift was driven in part by evidence on how cash transfers affect local economies. In this post, we break down that evidence, explaining how cash boosts local economies. Summary:
    Effective Altruism Forum | 3 days ago
  • ImpossibleBench: Measuring Reward Hacking in LLM Coding Agents
    This is a post about our recent work ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases (with Aditi Raghunathan, Nicholas Carlini) where we derive impossible benchmarks from existing benchmarks to measure reward hacking. Figure 1: Overview of the ImpossibleBench framework.
    LessWrong | 3 days ago
  • Alignment is Pre-Paradigmatic | PODCAST EXCERPT
    Full episode: https://youtu.be/WexyMWLVvX0
    The Flares | 3 days ago
  • Emergent Introspective Awareness in Large Language Models
    New Anthropic research (tweet, blog post, paper): We investigate whether large language models can introspect on their internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations.
    LessWrong | 3 days ago
  • Some data from LeelaPieceOdds
    I've been curious about how good LeelaPieceOdds is, so I downloaded a bunch of data and graphed it. For context, Leela is a chess bot and this version of it has been trained to play with a handicap. This is BBNN odds, meaning Leela starts without bishops and knights. I first heard about LeelaQueenOdds from simplegeometry's comment, and I've been playing against it occasionally since then.
    LessWrong | 3 days ago
  • Art auction in support of Ayuda Efectiva
    Ayuda Efectiva | 3 days ago
  • OpenAI explores $1T IPO 💰, Nvidia hits $5T 📈, Chrome Writer API 📝
    TLDR AI | 3 days ago
  • OpenManifold
    Forecasting AI's leading firm
    Manifold Markets | 3 days ago
  • An Opinionated Guide to Privacy Despite Authoritarianism
    I've created a highly specific and actionable privacy guide, sorted by importance and venturing several layers deep into the privacy iceberg. I start with the basics (password manager) but also cover the obscure (dodging the millions of Bluetooth tracking beacons which extend from stores to traffic lights; anti-stingray settings; flashing GrapheneOS on a Pixel).
    LessWrong | 3 days ago
  • The best way to help Hurricane Melissa survivors may not be what you think
    We’re making this story accessible to all readers as a public service. Learn more about how to support our work. Hurricane Melissa plowed through the Caribbean on Tuesday as an enormous Category 5 storm, knocking out power lines, flooding hospitals, and killing dozens of people in its path. Already, the damage has been catastrophic. In […]...
    Future Perfect | 3 days ago
  • EA Survey 2024: Community Satisfaction, Retention & Mental Health
    Summary: Overall satisfaction with the EA community remains high (7.11/10). Satisfaction is slightly (though non-significantly) lower than in 2020 (7.26) and 2022 (7.16). People who joined EA more recently report higher satisfaction than those who joined earlier. Respondents identifying as men report higher satisfaction (7.24) than those who do not (6.88).
    Effective Altruism Forum | 3 days ago
  • What is Civil Society, and why should we care?: Farrell on Gellner on the conditions of liberty
    Editors’ Note: This post from Henry Farrell originally appeared on his Substack, Programmable Mutter. There are many possible stories about why American political conservatism is such an intellectual trainwreck. Here’s one. Conservatives used at least nominally to argue that it was important to protect civil society from the depredations of government, and many genuinely believed … Continue...
    HistPhil | 3 days ago
  • Safe Water Projects: Saving Lives and Improving Our Grantmaking
    Clean water. Most of us take it for granted. We can turn on the tap and have safe water to drink whenever we want, without having to give it a second thought. That’s not the case for more than a billion people around the world who lack access to uncontaminated drinking water.
    GiveWell | 3 days ago
  • “22 Minutes”: Global campaign highlights the suffering of fish and invites people to celebrate a cruelty-free Halloween
    Animal Law Focus, with the support of the Aquatic Animal Alliance and the communications development of Marketing Vegano, launched the global campaign #22Minutes, an initiative that seeks to debunk one of the most persistent myths in the debate about animals: that fish do not feel pain. Every year, billions of fish are slaughtered without legal recognition of their capacity to suffer.
    Animal Advocacy Forum | 3 days ago
  • From eggs to spuds: Why a cage-free shortfall is a wake-up call for potato sustainability claims
    The post From eggs to spuds: Why a cage-free shortfall is a wake-up call for potato sustainability claims appeared first on Mercy For Animals.
    Mercy for Animals | 3 days ago
  • Have We Reached the Pinnacle of Human Intelligence?
    "Because that's what's killing us is the mindset. Not even the reality. It's like the fear and delusion." "What makes us think that we reach the pinnacle of human intelligence when, if we look 200 years ago, there's such a massive gap?" "We're nowhere near our capabilities."
    Future of Life Institute | 3 days ago
  • Moving this substack
    I’m moving this substack to nosologist.substack.com so I can manage my two substacks from the same email account.
    John Halstead | 3 days ago
  • AI is probably not a bubble
    AI companies have revenue, demand, and paths to immense value
    The Power Law | 3 days ago
  • Mentorship opportunity, NatGeo funding, creating impact beyond nonprofits
    Your farmed animal advocacy update for late October 2025
    Hive | 3 days ago
  • Audits, not essays: How to win trust for enterprise AI
    Opinion: Alexandru Voica argues that application-layer AI companies are best off opening themselves up to rigorous testing rather than opining on AI safety
    Transformer | 3 days ago
  • AI Safety Newsletter #65: Measuring Automation and Superintelligence Moratorium Letter
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    AI Safety Newsletter | 3 days ago
  • The Simplest Argument For Veganism
    It would be immoral to run a factory farm so it's immoral to buy from one
    Bentham's Newsletter | 3 days ago
  • Inside The Meat Industry’s Backlash To EAT-Lancet
    The meat industry orchestrated a coordinated online campaign to discredit the landmark 2019 EAT-Lancet planetary health diet. The post Inside The Meat Industry’s Backlash To EAT-Lancet appeared first on Faunalytics.
    Faunalytics | 3 days ago
  • New Roots Institute is hiring: Manager of Community Building
    Empower the Next Generation to End Factory Farming. What would it mean to dedicate your time, talent, and energy to creating a more just and sustainable food system? New Roots Institute is a growing nonprofit dedicated to ending factory farming by empowering the next generation of advocates.
    Animal Advocacy Forum | 3 days ago
  • Will AI solve medicine?
    Some say AI will solve medicine within a decade. Others believe biology is far more complex than people imagine and AI will hit the limits of clinical trials and economics. Who's right?
    The Works in Progress Newsletter | 3 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.