Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • OpenAI: The nonprofit refuses to die (with Tyler Whitmer)
    80,000 Hours | 2 hours ago
  • How Continental Philosophers "Argue"
    On the unseriousness of the discipline
    Bentham's Newsletter | 3 hours ago
  • AI doesn’t need to be general to be dangerous
    There’s more to AI safety than the AGI debate...
    Transformer | 3 hours ago
  • Education And Taste-Testing Show Promise For Promoting Cultivated Meat
    Although Singapore was the first country to approve cultivated meat for human consumption, public acceptance remains a challenge. Researchers investigated whether education and taste-testing could help change this.
    Faunalytics | 3 hours ago
  • Cause prio cruxes in 2026?
    The cause prioritization landscape in EA is changing. Focus has shifted away from evaluation of general cause areas or cross-cause comparisons, with the vast majority of research now comparing interventions within particular cause areas. Artificial Intelligence does not comfortably fit into any of the traditional cause buckets of Global Health, Animal Welfare, and Existential Risk.
    Effective Altruism Forum | 3 hours ago
  • Mizoram minister launches project to tackle violence against women
    Mizoram's social welfare, women, and child development minister inaugurated the Gender Based Violence Solve project, jointly taken up with J-PAL South Asia.
    J-PAL | 3 hours ago
  • Recipe: Heartbreak Pie
    I am presenting you with my recipe for Heartbreak Pie.
    Thing of Things | 4 hours ago
  • Show Notes: Eleos on Bloomberg's Odd Lots
    Corporate governance, phenomenal consciousness, and stinky stuff
    Eleos AI | 4 hours ago
  • The bloodshed in Sudan is visible from space
    The carnage in the Sudanese city of El Fasher has become so severe that the blood stains can be seen from space. The paramilitary Rapid Support Forces (RSF) — which attacked the capital of Khartoum two years ago, kicking off a brutal civil war — finally took over El Fasher last week. The RSF’s capture […]...
    Future Perfect | 6 hours ago
  • How likely is dangerous AI in the short term?
    How large of a breakthrough is necessary for dangerous AI? In order to cause a catastrophe, an AI system would need to be very competent at agentic tasks. The best metric of general agentic capabilities is METR’s time horizon.
    LessWrong | 10 hours ago
  • GAIN’S Strategy For Harnessing Artificial Intelligence In Programmes
    Billions of people worldwide are malnourished. Food systems transformation is essential to address this challenge, yet it is not happening fast enough.
    Global Alliance for Improved Nutrition | 11 hours ago
  • Are Groot and Baby Groot the Same Person?
    This post contains spoilers for Guardians of the Galaxy. At the end of Guardians of the Galaxy, Groot—a sapient tree with a three-word vocabulary—dies. They take a splinter from his…trunk, I guess?…and put it in a pot, from which springs Baby Groot. There was a debate among fans as to whether Baby Groot is Groot regenerated, or if Baby Groot is an entirely new person.
    Philosophical Multicore | 11 hours ago
  • Changes to the 80k podcast
    This is a brief update about what’s been happening on the 80k podcast team over the last 6 months, because we’ve undergone quite a few changes. We’re also hiring (more details below). As background: The podcast has been running for 8(!) years, run and predominantly hosted by Rob Wiblin.
    Effective Altruism Forum | 12 hours ago
  • Andrej Karpathy on LLM cognitive deficits
    Excerpt from Dwarkesh Patel's interview with Andrej Karpathy that I think is valuable for LessWrong-ers to read. I think he's basically correct. Emphasis in bold is mine. Andrej Karpathy 00:29:53. I guess I built the repository over a period of a bit more than a month. I would say there are three major classes of how people interact with code right now.
    LessWrong | 15 hours ago
  • iPhone Air flops 📱, Anthropic OpenAI financials leak 💰, becoming a compiler engineer 👨‍💻
    TLDR AI | 19 hours ago
  • From Vitalik: Galaxy brain resistance
    I basically fully endorse the full article. I like the concluding bit too. This brings me to my own contribution to the already-full genre of recommendations for people who want to contribute to AI safety: Don't work for a company that's making frontier fully-autonomous AI capabilities progress even faster. Don't live in the San Francisco Bay Area. Cheers, Gabe.
    LessWrong | 20 hours ago
  • October 2025 Updates
    Every month we send an email newsletter to our supporters sharing recent updates from our work. We publish selected portions of the newsletter on our blog to make this news more accessible to people who visit our website. For key updates from the latest installment, please see below! If you’d like to receive the complete newsletter in your inbox each month, you can subscribe here.
    GiveWell | 22 hours ago
  • The Humane League is hiring!
    🐓About the role 🐓. As Corporate Relations Lead at The Humane League, you will play an integral role in our organizational strategy to engage with corporations and advance protections for animals in the food supply chain. You’re motivated to make a difference in the lives of animals and have the passion needed to encourage and inspire corporate decision makers to do the same.
    Animal Advocacy Forum | 22 hours ago
  • Policy as a High-Impact Career Path
    Policy shapes the rules and systems that affect how societies function—from climate regulations to public health programs to economic reforms. Because policy decisions often impact millions or even billions of people, careers in this field can be one of the most powerful ways to create positive change in the world. ...
    Probably Good | 23 hours ago
  • Manifest X DC Opening Benediction - Making Friends Along the Way
    Manifest X DC was this weekend, hopefully the first of many local spin-offs of Manifest. Despite a late prediction market surge, there were no fires. An attendee wrote a nice post of observations, though he left the afterparty before the lights dimmed and the duels began. I gave an opening talk, which ended up being more personal and emotional than I'd set out to write.
    LessWrong | 23 hours ago
  • Three Kinds Of Ontological Foundations
    Why does a water bottle seem like a natural chunk of physical stuff to think of as “A Thing”, while the left half of the water bottle seems like a less natural chunk of physical stuff to think of as “A Thing”? More abstractly: why do real-world agents favor some ontologies over others?
    LessWrong | 23 hours ago
  • Book Announcement: The Gentle Romance
    It’s been eight months since I released my last story, so you could be forgiven for thinking that I’d given up on writing fiction. In fact, it’s the opposite. I’m excited to announce that I’m releasing my first fiction collection—The Gentle Romance: Stories of AI and Humanity—with Encour Press in mid-December! (Cover design by Barış Şehri).
    LessWrong | 23 hours ago
  • Ontology for AI Cults and Cyborg Egregores
    TL;DR: If you already have clear concepts for memes, cyber memeplexes, egregores, the mutualism-parasitism spectrum and possession, skip. Otherwise, read on. I haven't found concepts useful for thinking about this written in one place, so here is an ontology which I find useful. Prerequisite: Dennett's three stances (physical, design, intentional). A meme is a replicator of cultural evolution.
    LessWrong | 23 hours ago
  • Ontology for AI Cults and Cyborg Egregores
    TL;DR: If you already have clear concepts for memes, cyber memeplexes, egregores, the mutualism-parasitism spectrum and possession, skip. Otherwise, read on. I haven't found concepts useful for thinking about this written in one place, so here is an ontology which I find useful. Prerequisite: Dennett's three stances (physical, design, intentional). A meme is a replicator of cultural evolution.
    AI Alignment Forum | 23 hours ago
  • Introducing LEAP: The Longitudinal Expert AI Panel
    Every month, the Forecasting Research Institute asks top computer scientists, economists, industry leaders, policy experts and superforecasters for their AI predictions. Here’s what we learned from the first three months of forecasts: AI is already reshaping labor markets, culture, science, and the economy—yet experts debate its value, risks, and how fast it will integrate into everyday life.
    Effective Altruism Forum | 23 hours ago
  • 🟩 Iran drought warning, rebels agree to Sudan ceasefire, Trump tariffs threatened || Global Risks Weekly Roundup #45/2025
    Executive summary
    Sentinel | 1 days ago
  • Build times for gigawatt-scale data centers can be 2 years or less
    Hyperscalers are coming in hot with the next generation of AI datacenters
    Epoch Newsletter | 1 days ago
  • Disney stopped being about romance and started being about trauma
    In the past decade or so, Disney/Pixar movies have undergone a really interesting change.
    Thing of Things | 1 days ago
  • Flourishing Futures and Anthropic
    Why Joe Carlsmith should join ISIS, or, barring that, Anthropic
    Bentham's Newsletter | 1 days ago
  • Open Thread 407
    Astral Codex Ten | 1 days ago
  • Crappy Cappuccinos: A Welfare Assessment Of Kopi Luwak Production
    A study reveals how civets suffer for the world’s “most luxurious” coffee.
    Faunalytics | 1 days ago
  • We're hiring a policy reporter
    We’re looking for an all-star reporter to cover the full spectrum of US frontier AI policy and politics, including the White House, Congress, key states, and industry efforts to shape legislation.
    Transformer | 1 days ago
  • Visionary Pragmatism: A Third Way for Animal Advocacy
    This work is my own, written in my spare time, and doesn’t reflect the views of my employer. Less than ~3% of the text is AI-generated. Thank you to Laila Kassam, Haven King-Nobles, Lincoln Quirk, Tom Billington and Harley McDonald-Eckersall for their feedback, which doesn’t imply their endorsement of the ideas presented. Summary:
    Effective Altruism Forum | 1 days ago
  • Dispelling common misconceptions about sentience-centered ethics
    To give someone moral consideration means to avoid harming them and to seek their benefit. The criterion of sentience holds that we should give moral consideration to all beings capable of having experiences.
    Animal Ethics | 1 days ago
  • Import AI 434: Pragmatic AI personhood; SPACE COMPUTERS; and global government or human extinction;
    The future is biomechanical computation
    Import AI | 1 days ago
  • Seeking Radical Deontology
    Status quo harms should motivate reform
    Good Thoughts | 1 days ago
  • Holiday Gift Guide
    I got what you need
    Atoms vs Bits | 1 days ago
  • Your washing machine is actually a time machine
    If Good News had a patron saint, it would be the Swedish professor of global health Hans Rosling. Rosling, who died in 2017, was a wizard at using data and storytelling to challenge misconceptions around global development and progress. With statistics in hand, Rosling could convince the most determined pessimist that the world was, on […]...
    Future Perfect | 1 days ago
  • Myopia Mythology
    It's been a while since I wrote about myopia! My previous posts about myopia were "a little crazy", because it's not this solid well-defined thing; it's a cluster of things which we're trying to form into a research program. This post will be "more crazy". The Good/Evil/Good Spectrum. "Good" means something along the lines of "helpful to all".
    LessWrong | 1 days ago
  • ChinAI #335: Rereading Stanford's 2025 AI Index
    Greetings from a world where…...
    ChinAI Newsletter | 1 days ago
  • Things you should know about London
    After I wrote my post about things you should know, a commenter suggested writing a list specifically about London, which I thought was a rather good idea.
    Samstack | 1 days ago
  • Learning information which is full of spiders
    This essay contains an examination of handling information which is unpleasant to learn. Also, more references to spiders than most people want. CW: Pictures of spiders. I. Litanies and Aspirations. If the box contains a diamond, I desire to believe that the box contains a diamond; If the box does not contain a diamond, I desire to believe that the box does not contain a diamond; Let me...
    LessWrong | 1 days ago
  • A Thesis Regarding The Impossibility Of Giving Accurate Time Estimates, Presented As An Experiment On Form In Which The Essay Solely Consists Of A Title; In Which The Thesis States That, If Task Times Follow A Pareto Distribution (With The Right Parameters), Then An Unknown Task Takes Infinite Time In Expectation; And Therefore, In The General Case, You Cannot Provide An Accurate Time Estimate Because Any Finite Estimate Provided Will Not Capture The Expected Value; And, More Precisely, Every Estimate Will Be An Underestimate, Because Every Number Is Smaller Than Infinity; And This Matches With The General Observation That, When People Estimate Task Times, They Usually Underestimate The True Time; However, In Opposition To This Thesis Are At Least Two Observations; First, That Even If Tasks Take Infinite Time In Expectation, The Median Task Time Is Finite, And An Infinite-Expected-Value Task-Time Distribution Does Not Preclude The Possibility That Time Estimates Can Overestimate As Often As They Underestimate, But People Fail To Do This; Second, That Certain Known Biases That Result In People Underestimating The Difficulty Of Tasks, Such As Envisioning The Best-Case Scenario Rather Than The Average Case; However, In Defense Of The Original Thesis, Optimism Bias And The Pareto-Distributed Problem Space May Be Two Perspectives On The Same Phenomenon; But Even If We Reconcile The Second Concern With The Thesis, We Are Still Left With The First Concern, In Which An Unbiased Estimate Of The Median Time Should Still Be Possible, But People Are Overly Optimistic About Median Task Times; Thus, Ultimately Concluding That The Thesis Of This Essay--Or, More Accurately, The Thesis Of This Title--Is A Faulty Explanation Of People's General Inability To Provide Accurate Time Estimates; Then Following Up This Thesis With The Additional Observation That We Can Model Tasks As Turing Machines; And The Halting Problem States That It Is Impossible In General To Say Whether A Turing Machine Will Halt, And As A Corollary, It Is Impossible In General To Predict How Long A Turing Machine Will Run For Even If It Does Halt; So Perhaps The Halting Problem Means That We Cannot Make Accurate Time Estimates In General; However, It Is Not Clear That The Sorts Of Tasks That Human Beings Estimate Are Sufficiently General For This Concern To Apply, And Indeed It Seems Not To Apply Because Some Subset Of People Do In Fact Succeed At Making Unbiased Time Estimates In At Least Some Situations, At Least Where 'Unbiased' Is Defined Relative To The Median Rather Than The Mean; It Is Difficult To Say In Which Real-Life Situations The Halting Problem Is Relevant Because It Is Not Feasible To Construct A Formal Mathematical Proof For Realistic Real-Life Situations Because This Would Require Creating A Sophisticated Model In Which The State Of The Universe Is Translated To A Turing Machine, Which Would Be An Extremely Large Turing Machine And Probably Not Feasible To Reason About; Leading To The Conclusion That This Essay's Speculation Led Nowhere
    Philosophical Multicore | 1 days ago
  • The two types of LLM preferences
    The standard approach to measure values or preferences of LLMs is to:
    AI Safety Takes | 1 days ago
  • Climate Change Does Not Only Threaten Food Production but Also Crop Nutrition
    Authors: Michelle Nova Lauwrhetta, Ibnu Budiman. Indonesia, 7 November 2025.
    Global Alliance for Improved Nutrition | 1 days ago
  • How and Why to Make EA Cool
    EA has always had a coolness crisis. The name itself is clunky and overly precise. The logo is fine-but-not-great, and the visual branding has never excited anybody. EA orgs have spent over a decade struggling to tell exciting stories or get serious numbers of social media followers. In short: It’s never been sexy; it’s never been cool. Maybe you don’t think being cool matters.
    Effective Altruism Forum | 2 days ago
  • Problems I've Tried to Legibilize
    Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I've organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI...
    Effective Altruism Forum | 2 days ago
  • Problems I've Tried to Legibilize
    Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I've organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI...
    LessWrong | 2 days ago
  • Viable Paradise Review
    I recently spent six days at Viable Paradise, the yearly science fiction and fantasy writers’ workshop.
    Thing of Things | 2 days ago
  • Book Announcement
    It’s been eight months since I released my last story, so you could be forgiven for thinking that I’d given up on writing fiction.
    Narrative Ark | 2 days ago
  • Apple satellite features 🛰️, inside Cursor 👨‍💻, becoming full stack 💼
    TLDR AI | 2 days ago
  • Anxiety is one of the world’s most common health issues. How have treatments evolved over the last 70 years?
    Anxiety affects at least hundreds of millions of people every year. What treatments are available, and how have they changed over time?
    Our World in Data | 2 days ago
  • New Altruismo Racional reading club
    Ayuda Efectiva | 2 days ago
  • Effective petitions (November 2025)
    We can do the most good not only with our money by donating to top charities and with our professional time by pursuing high-impact ethical careers, but also with our moments of spare time, simply by signing online petitions or …
    The Rational Ethicist | 2 days ago
  • Myopia Mythology
    It's been a while since I wrote about myopia! My previous posts about myopia were "a little crazy", because it's not this solid well-defined thing; it's a cluster of things which we're trying to form into a research program. This post will be "more crazy". The Good/Evil/Good Spectrum. "Good" means something along the lines of "helpful to all".
    AI Alignment Forum | 2 days ago
  • Condensation
    Condensation: a theory of concepts is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's natural abstractions/ natural latents research. Both theories seek to provide a clear picture of how to posit latent variables, such that once someone has understood the theory, they'll say "yep, I see now, that's how latent variables work!".
    LessWrong | 2 days ago
  • Condensation
    Condensation: a theory of concepts is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's natural abstractions/ natural latents research. Both theories seek to provide a clear picture of how to posit latent variables, such that once someone has understood the theory, they'll say "yep, I see now, that's how latent variables work!".
    AI Alignment Forum | 2 days ago
  • One Shot Singalonging is an attitude, not a skill or a song-difficulty-level*
    * by which I mean "it works pretty okay for songs of up-to-medium-high difficulty, see below". When I seek out advice about making people more singalongable, there's a cluster of advice I get from folksinger people that... seems totally "correct", but, feels... insufficiently ambitious or something.
    LessWrong | 2 days ago
  • Insofar As I Think LLMs "Don't Really Understand Things", What Do I Mean By That?
    When I put on my LLM skeptic hat, sometimes I think things like “LLMs don’t really understand what they’re saying”. What do I even mean by that? What’s my mental model for what is and isn’t going on inside LLMs’ minds? First and foremost: the phenomenon precedes the model.
    LessWrong | 2 days ago
  • Problems I've Tried to Legibilize
    Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I've organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI...
    AI Alignment Forum | 2 days ago
  • Listening to Tommy Robinson
    What did I learn by giving the right-wing activist my ears for 90 minutes?
    Reasonable People | 2 days ago
  • America’s epidemic of rudeness
    When my oldest friend, a baseball historian, visited last summer, we spent a pleasant afternoon at Nationals Park watching my beloved Washington Nationals lose to the Miami Marlins. Pleasant, that is, until we left the ballpark on a packed escalator. Halfway down, a young man threw up on my friend’s leg. He laughed uproariously and, without a hint of an apology, walked away.
    Marc Gunther | 2 days ago
  • We're Not The Center of the Moral Universe
    And why that fact matters
    Bentham's Newsletter | 2 days ago
  • Pandemics are a choice
    For the first time in history, we have an opportunity to stop the next pandemic. From the earliest thinking of the Greek physician and philosopher Claudius Galen to the 19th-century British “father of epidemiology” John Snow to the years before the Covid-19 pandemic, recurring, widespread, and uncontrollable illness has been beyond the grasp of the […]...
    Future Perfect | 2 days ago
  • Upside Volatility Is Bad
    Investors often say that standard deviation is a bad way to measure investment risk because it penalizes upside volatility as well as downside. I agree that standard deviation isn’t a great measure of risk, but that’s not the reason.
    Philosophical Multicore | 2 days ago
  • The jailbreak argument against LLM values
    Bostrom (2014) defined the AI value loading problem as: how could we get some value into an artificial agent, so as to make it pursue that value as its final goal? JD Pressman thinks this is obviously solved in current LLM systems:
    argmin gravitas | 3 days ago
  • Synchronicity surface area
    Some people have all the luck.
    Thing of Things | 3 days ago
  • Omniscaling to MNIST
    In this post, I describe a mindset that is flawed, and yet helpful for choosing impactful technical AI safety research projects. The mindset is this: future AI might look very different than AI today, but good ideas are universal. If you want to develop a method that will scale up to powerful future AI systems, your method should also scale down to MNIST.
    LessWrong | 3 days ago
  • Comparing Payor & Löb
    Löb's Theorem: If ⊢□x→x, then ⊢x. Or, as one formula: □(□x→x)→□x. Payor's Lemma: If ⊢□(□x→x)→x, then ⊢x. Or, as one formula: □(□(□x→x)→x)→□x. In the following discussion, I'll say "reality" to mean x, "belief" to mean □x, "reliability" to mean □x→x (ie, belief is reliable when belief implies reality), and "trust" to mean □(□x→x) (belief-in-reliability).
    LessWrong | 3 days ago
  • Against “You can just do things”
    The barriers between us and what we want are often entirely imagined. It is true: you can learn how to paint, change careers, write a paper or run a marathon. These things are hard, but we shouldn’t pretend that they are impossible. You can just do them. But this mindset has a dangerous edge case: it can make you skip asking for permission precisely when you should ask.
    LessWrong | 3 days ago
  • Comparing Payor & Löb
    Löb's Theorem: If ⊢□x→x, then ⊢x. Or, as one formula: □(□x→x)→□x. Payor's Lemma: If ⊢□(□x→x)→x, then ⊢x. Or, as one formula: □(□(□x→x)→x)→□x. In the following discussion, I'll say "reality" to mean x, "belief" to mean □x, "reliability" to mean □x→x (ie, belief is reliable when belief implies reality), and "trust" to mean □(□x→x) (belief-in-reliability).
    AI Alignment Forum | 3 days ago
  • Pythia
    [CW: Retrocausality, omnicide, philosophy]. Alternate format: Talk to this post and its sources. Three decades ago a strange philosopher was pouring ideas onto paper in a stimulant-fueled frenzy. He wrote that ‘nothing human makes it out of the near-future’ as techno-capital acceleration sheds its biological bootloader and instantiates itself as Pythia: an entity of self-fulfilling prophecy...
    LessWrong | 3 days ago
  • Mourning a life without AI
    Recently, I looked at the one pair of winter boots I own, and I thought “I will probably never buy winter boots again.” The world as we know it probably won’t last more than a decade, and I live in a pretty warm area. I. AGI is likely in the next decade. It has basically become consensus within the AI research community that AI will surpass human capabilities sometime in the next few decades.
    LessWrong | 3 days ago
  • Escalation and perception
    Crosspost from my blog. Introduction. Conflict pervades the world. Conflict can come from mere mistakes, but many conflicts are not mere mistakes. We don't understand conflict. We doubly don't understand conflict because some conflicts masquerade as mistakes, and we wish that they were mere mistakes, so we are happy to buy into that masquerade. This is a mistake on our part, haha.
    LessWrong | 3 days ago
  • Unexpected Things that are People
    Cross-posted from https://bengoldhaber.substack.com/. It’s widely known that Corporations are People. This is universally agreed to be a good thing; I list Target as my emergency contact and I hope it will one day be the best man at my wedding. But there are other, less well known non-human entities that have also been accorded the rank of person.
    LessWrong | 3 days ago
  • The Offsetting Puzzle
    Moral offsetting is the practice of making up for a bad or prima facie wrongful action by doing something else that is good enough to outweigh the bad act.
    Fake Nous | 3 days ago
  • Two New Infinite Paradoxes
    You're going to have to abandon some obvious principle--twice
    Bentham's Newsletter | 3 days ago
  • The tragedy of Laika, the first animal to orbit the earth
    In March, I visited the Lowell Observatory — the astronomical research site where Pluto was first discovered — in Flagstaff, Arizona. I stood in line to squint through telescopes at Jupiter and the surface of the moon before the night turned cloudy and drove me inside the Astronomy Discovery Center museum. And like all museum […]...
    Future Perfect | 3 days ago
  • Writing Your Representatives: A Worthwhile and Neglected Intervention
    Is it a good use of time to call or write your representatives to advocate for issues you care about? I did some research, and my current (weakly-to-moderately-held) belief is that messaging campaigns are very cost-effective. In this post: I look at evidence from randomized experiments, surveys of legislators’ opinions, and observational evidence.
    Philosophical Multicore | 3 days ago
  • Why “Minds Aren’t Magic”?
    A lot of people straightforwardly believe that minds are magic: that our decision making is not simply the result of electricity and biochemistry in neurons and synapses in the brain, but at the core is the product of an immortal soul. This, of course, I reject. But it seems to me that a lot of...
    Minds Aren’t Magic | 4 days ago
  • Don’t Worry – It Can’t Happen
    (Originally a twitter thread) When @fermatslibrary brought up this 1940 paper about why we have nothing to worry about from nuclear chain reactions, I first checked that it was real and not a modern forgery. Because it seems almost too good to be true in the light of current AI safety talk. Yes, the paper was real: Harrington, […]...
    Andart II | 4 days ago
  • 13 Arguments About a Transition to Neuralese AIs
    Over the past year, I have talked to several people about whether they expect frontier AI companies to transition away from the current paradigm of transformer LLMs toward models that reason in neuralese within the next few years. This post summarizes 13 common arguments I’ve heard, six in favor and seven against a transition to neuralese AIs. The following table provides a summary:
    LessWrong | 4 days ago
  • Book Review: Plasticosis
    Plasticosis (free pdf available here) is a novel by Ikse Mennen about a postapocalyptic future in which it turns out that microplastics make people very, very sick with “plasticosis.” The resulting political disruption caused the United States to fracture into thousands of city-states.
    Thing of Things | 4 days ago
  • A country of alien idiots in a datacenter: AI progress and public alarm
    Epistemic status: I'm pretty sure AI will alarm the public enough to change the alignment challenge substantially. I offer my mainline scenario as an intuition pump, but I expect it to be wrong in many ways, some important. Abstract arguments are in the Race Conditions and concluding sections... Nora has a friend in her phone. Her mom complains about her new AI "colleagues."
    LessWrong | 4 days ago
  • Jane Goodall’s legacy is about animal rights — and the choices on our plates
    Op-ed by Michael Freeman published in The Sacramento Bee on October 30, 2025.
    Mercy for Animals | 4 days ago
  • Cyber Volunteers Convene in Madison, Wisconsin
    On October 23rd, the UC Berkeley Center for Long-Term Cybersecurity and Wisconsin Emergency Management were proud to host a packed room of cyber defenders across academia, state government,…
    Center for Long-Term Cybersecurity | 4 days ago
  • Epoch’s Capabilities Index stitches together benchmarks across a wide range of difficulties
    Interpreting our new capabilities index
    Epoch Newsletter | 4 days ago
  • Toward Statistical Mechanics Of Interfaces Under Selection Pressure
    Imagine using an ML-like training process to design two simple electronic components, in series. The parameters θ1 control the function performed by the first component, and the parameters θ2 control the function performed by the second component.
    LessWrong | 4 days ago
  • Two easy digital intentionality practices
    A lot of people are daunted by the idea of doing a full digital declutter. Those people ask me all the time, “isn’t there something easier I can do that will still give me some of those sweet sweet benefits you were talking about?”. The answer is: sort of.
    LessWrong | 4 days ago
  • A scheme to credit hack policy gradient training
    Thanks to Inkhaven for making me write this, and Justis Mills, Abram Demski, Markus Strasser, Vaniver and Gwern for comments. None of them endorse this piece. The safety community has previously worried about an AI hijacking the training process to change itself in ways that it endorses, but the developers don’t.
    AI Alignment Forum | 4 days ago
  • Urgent Call for Accelerated Action on Climate-Nutrition Integration – Latest Assessment
    London/Geneva. For immediate release. Sub-Saharan Africa, Latin America and the Caribbean are leading the way.
    Global Alliance for Improved Nutrition | 4 days ago
  • More pieces we would like to commission
    Write for us.
    The Works in Progress Newsletter | 4 days ago
  • Unmasking Dairy Deception: 37,000+ Voices Demand Transparency
    The time for dairy industry deception is ending.
    Mercy for Animals | 4 days ago
  • Growing the Coalition: Where the Metascience Alliance Is Headed
    Improving research works best when efforts are coordinated, not fragmented. The Metascience Alliance is a coalition for coordination and collaboration, connecting stakeholders, identifying and advancing shared priorities, and accelerating collective progress. Since we first introduced the Metascience Alliance at the 2025 Metascience Conference, 39 organizations have signed the Letter of...
    Center for Open Science | 4 days ago
  • Incomparability Implies Nihilism
    If there are incomparable goods then nothing we do matters
    Bentham's Newsletter | 4 days ago
  • A federal AI backstop is not as insane as it sounds
    Transformer Weekly: No B30A chips for China, Altman’s ‘pattern of lying’ and a watered down EU AI Act...
    Transformer | 4 days ago
  • Factory Farming Fuels A Climate “Doom Loop”
    Extreme weather is killing millions of farmed animals and destroying farms worldwide, creating a cycle of suffering and loss.
    Faunalytics | 4 days ago
  • A personal take on why (and why not) to work on AI safety at Open Philanthropy
    You may have noticed that Open Philanthropy is hiring for several roles in our GCR division: senior generalists across our global catastrophic risks team, and grantmakers for our technical AI safety team.
    Catherine’s Blog | 4 days ago
  • Above the Thames
    What Happens When The Centre Cannot Hold (and isn't trying)
    Manifold Markets | 4 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites are grayed out.
Effective Altruism News is a side project of Tlön.