Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • The old tech that could help stop the next airborne pandemic
    It’s hard to imagine modern life without glycols. They are used in cosmetics, fog machines, and food. As you read this, you’re almost certainly wearing or drinking from something they were used to produce — polyester fabric or plastic bottles, for example. If you brush your teeth with toothpaste or top your salad with bottled […]...
    Future Perfect | 19 minutes ago
  • Elon Musk could lose his case against OpenAI — and still get what he wants
    So, what’s a guy got to do to become a billionaire around here? Greg Brockman scribbled the question in his diary, recently unsealed as trial evidence, just two years after co-founding OpenAI as a charity in 2015: “Financially, what will take me to $1B?” For Brockman, now OpenAI’s president, the answer was a yearslong restructuring […]...
    Future Perfect | 49 minutes ago
  • Three Model Organisms For Taste
    Astral Codex Ten | 2 hours ago
  • Strengthening County Financing for Sustainable Community Health Systems in Kenya
    The post Strengthening County Financing for Sustainable Community Health Systems in Kenya appeared first on Living Goods.
    Living Goods | 4 hours ago
  • LGT Venture Philanthropy renews partnership with Living Goods to strengthen community health systems
    The post LGT Venture Philanthropy renews partnership with Living Goods to strengthen community health systems appeared first on Living Goods.
    Living Goods | 4 hours ago
  • How we sent $235 to families hit by a Philippines earthquake – a first for the country
    A powerful 6.9 magnitude earthquake devastated Northern Cebu. On September 30, 2025, a magnitude 6.9 earthquake struck off the coast of Bogo City in Cebu Province, displacing approximately 90,000 people, damaging or destroying more than 195,000 homes and impacting ~753,000 people. It was followed by 12,000 aftershocks. For families already living in damaged homes, the […]...
    GiveDirectly | 7 hours ago
  • The tables have turned on AI sceptics
    Could we have human-level AI within the next few decades? For a long time, many people have dismissed this idea as armchair speculation. In their view, we shouldn’t ground our beliefs about transformative technologies in vague hunches and fragile multi-step arguments. We need more solid evidence, like clear empirical trends. We need to be epistemically conservative.
    Effective Altruism Forum | 8 hours ago
  • AirPod cameras 🎧, GPT-Realtime-2 🤖, Cloudflare's AI layoffs 💼
    TLDR AI | 11 hours ago
  • Mechanistic estimation for wide random MLPs
    This post covers joint work with Wilson Wu, George Robinson, Mike Winer, Victor Lecomte and Paul Christiano. Thanks to Geoffrey Irving and Jess Riedel for comments on the post. In ARC's latest paper, we study the following problem: given a randomly initialized multilayer perceptron (MLP), produce an estimate for the expected output of the model under Gaussian input.
    LessWrong | 13 hours ago
  • Why Light-Touch AI Safety Rules Can Matter
    "it does sometimes feel like very light touch requirements, like SB-53, or like you're spitting into a wildfire or something." "But like, it's a start, right?" "Maybe now you have to have an outside organization verify that you followed your safety and security policy."
    Future of Life Institute | 14 hours ago
  • Why Minimal AI Rules Still Face Industry Opposition
    "I think it's like very hard to pass AI legislation in the US right now at the federal level, but also even at the state level." "Leave the document that's currently on your website on your website." "And even this kind of has companies like, you know, screaming, wailing, and gnashing their teeth and running their clothes about how oppressed they are by overregulation, right?"
    Future of Life Institute | 14 hours ago
  • How AI Could Centralize Presidential Control of Bureaucracy
    "This is like the deep state problem, right?" "But if you have like loyal AI subordinates in every agency that kind of solves that problem, it's just like, oh, align it to whatever the president wants." "That's like a kind of scary prospect."
    Future of Life Institute | 14 hours ago
  • Americans: call your senators today to stop the Save Our Bacon Act
    The Farm Bill currently under consideration by the U.S.
    Thing of Things | 14 hours ago
  • Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations
    Abstract. We introduce Natural Language Autoencoders (NLAs), an unsupervised method for generating natural language explanations of LLM activations. An NLA consists of two LLM modules: an activation verbalizer (AV) that maps an activation to a text description and an activation reconstructor (AR) that maps the description back to an activation.
    LessWrong | 14 hours ago
  • Natural Language Autoencoders Produce Unsupervised Explanations of LLM Activations
    Abstract. We introduce Natural Language Autoencoders (NLAs), an unsupervised method for generating natural language explanations of LLM activations. An NLA consists of two LLM modules: an activation verbalizer (AV) that maps an activation to a text description and an activation reconstructor (AR) that maps the description back to an activation.
    AI Alignment Forum | 14 hours ago
  • Try, even if they have you cold
    I think smart people try things less often than they should, because of a cached mental pattern where you think of what might go wrong, and you find a foolproof countermeasure on the part of some antag, and so we call it off. Stockfish, playing itself, might as well resign from the first move if you force it to give knight odds. Sensei (the Go AI) should do the same when it has to give 6 stones.
    LessWrong | 15 hours ago
  • It May Be Possible to Improvise A High Grade Bioshelter
    Surviving an environment-to-human pathogen would require widespread protection from airborne exposure, indoors and out. We think this may be achievable using improvised bioshelters and PPE made from household materials, though this hypothesis still needs more testing.
    Defenses in Depth | 15 hours ago
  • Prioritizing Environment-to-Human Biological Threats
    Pathogens that replicate in the environment and transmit to humans pose a uniquely direct existential risk, far more so than those that spread person-to-person or can't grow outside a host. Of the possible exposure routes, airborne transmission is by far the hardest to defend against.
    Defenses in Depth | 15 hours ago
  • Save Our Pigs!
    Note: This post was crossposted from the Coefficient Giving Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. Subtitle: The pork lobby is one farm bill away from gutting our strongest farm animal welfare laws.
    Effective Altruism Forum | 15 hours ago
  • A review of “Investigating the consequences of accidentally grading CoT during RL”
    Last week, OpenAI staff shared an early draft of Investigating the consequences of accidentally grading CoT during RL with Redwood Research staff. To start with, I appreciate them publishing this post. I think it is valuable for AI companies to be transparent about problems like these when they arise.
    LessWrong | 16 hours ago
  • A review of “Investigating the consequences of accidentally grading CoT during RL”
    Last week, OpenAI staff shared an early draft of Investigating the consequences of accidentally grading CoT during RL with Redwood Research staff.
    Redwood Research | 17 hours ago
  • How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
    Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help.
    Future of Life Institute | 18 hours ago
  • The New Bio Frontier
    Center for Security and Emerging Technology | 18 hours ago
  • Yoshua Bengio thinks he knows how to build safe superintelligence
    The post Yoshua Bengio thinks he knows how to build safe superintelligence appeared first on 80,000 Hours.
    80,000 Hours | 18 hours ago
  • Save Our Pigs!
    The pork lobby is one farm bill away from gutting our strongest farm animal welfare laws
    Farm Animal Welfare Newsletter | 18 hours ago
  • Expression of Interest: Chief of Staff (Operations Team)
    The post Expression of Interest: Chief of Staff (Operations Team) appeared first on 80,000 Hours.
    80,000 Hours | 18 hours ago
  • Mechanistic estimation for wide random MLPs
    This post covers joint work with Wilson Wu, George Robinson, Mike Winer, Victor Lecomte and Paul Christiano. Thanks to Geoffrey Irving and Jess Riedel for comments on the post. In ARC's latest paper, we study the following problem: given a randomly initialized multilayer perceptron (MLP), produce an estimate for the expected output of the model under Gaussian input.
    AI Alignment Forum | 18 hours ago
  • The tables have turned on AI sceptics
    Epistemic conservatism no longer favours long timelines
    The Update | 19 hours ago
  • The Lives of British Animals
    The conditions for British farm animals are nightmarishly bad
    Bentham's Newsletter | 19 hours ago
  • New Statistical Method Reveals Flaws In Shelter Length-Of-Stay Calculations
    Traditional shelter length-of-stay calculations are misleading. Using a corrected statistical approach more accurately captures operational changes and resource needs. The post New Statistical Method Reveals Flaws In Shelter Length-Of-Stay Calculations appeared first on Faunalytics.
    Faunalytics | 20 hours ago
  • EU AI Act meets AI Agents
    Highlights from Tech Policy Press article “The EU AI Act is Not Ready for Agents,” examining how the EU AI Act applies to AI agents and governance challenges. The post EU AI Act meets AI Agents appeared first on The Future Society.
    The Future Society | 20 hours ago
  • Study Report: Is Personality 4, 5, or 6-Dimensional?
    Note: This is a longer and more technical report of our study into personality traits. If you want to see the shorter, more layperson-friendly version, click here. There's a debate that has raged in academic journals and among personality researchers about the nature of humans: how many dimensions does it take to best represent a person's personality?
    Clearer Thinking | 20 hours ago
  • How Many Traits Make Up Your Personality?
    Short of time? Read the key takeaways. Some personality models are empirically derived. The Big Five personality model analyzes personality in terms of: Openness (to Experience), Conscientiousness, Extraversion, Agreeableness, and Neuroticism (also known as ‘emotional instability’) This model emerged from the lexical hypothesis, which claimed if a personality difference matters, languages will...
    Clearer Thinking | 20 hours ago
  • Why AI Unemployment Could Resist Worker Adaptation
    "We're talking about potential AI systems that don't just like substitute for some forms of work, but actually substitute for all forms of work, such that like a human couldn't necessarily find a different job because the AI would be able to do that job too." "And so this could potentially yield just incredibly high unemployment rates, like unsustainably high."...
    Future of Life Institute | 21 hours ago
  • Why AI Is Not Like a Toaster
    "I think the biggest thing for me is just the agency." "But when it comes to these AI systems, like they're not being built like typical software." "It's kind of like if you built a bigger toaster and then all of a sudden your toaster could like hack the internet in addition to making toast."
    Future of Life Institute | 21 hours ago
  • Claude Mythos and Superhuman Vulnerability Discovery
    "these trends are already so ginormously fast." "Like I think maybe I wasn't expecting that the current trend would result in like superhuman vulnerability discovery happening this early on." "And I think there's just like very clear and compelling evidence that this AI system is like indeed exceeding human professionals in vulnerability discovery."
    Future of Life Institute | 21 hours ago
  • Happier Lives Fund: second round of disbursements (Q1 2026)
    Thanks to the generosity of our donors, the Happier Lives Fund (HLF) is growing, and so is its impact. We have now completed our second round of disbursements to our recommended charities, covering donations received in Q1 2026. Here is what that looks like in practice. How much did the HLF raise in Q1 […]
    Happier Lives Institute | 21 hours ago
  • Anastasia Gamick | FROs for Fundamental Capabilities @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Kearney Capuano: From studying neuroscience to helping others at scale | Effective Altruism Stories
    Kearney Capuano always wanted to help others but nothing ever felt good enough. She would volunteer and work at nonprofits, but there were always more people to be helped, more suffering to address. In university, she joined a neuroscience lab studying people who donate one of their kidneys to a complete stranger — trying to understand what drives that kind of selflessness.
    Centre for Effective Altruism | 22 hours ago
  • What is Unjust Discrimination?
    A systematic account
    Good Thoughts | 22 hours ago
  • New EA Forum LLM-use policy
    This policy does not apply to anything posted before this post's time of publication. New policy: You are welcome to use AI to help you write posts, but we ask that you disclose it when you do. Not disclosing that your post is AI-assisted could mean a rate-limit or a ban. We won’t enforce this policy for comments and quick takes, though we’d appreciate a norm of disclosure there as well.
    Effective Altruism Forum | 23 hours ago
  • There is no evidence you should reapply sunscreen every 2 hours.
    It’s incredible how many consensus guidelines dissolve when you look closely at them. If you listen to any authority on the subject of sunscreen, you will hear it endlessly repeated that you absolutely must reapply sunscreen every 2 hours while you are in the sun, and immediately after swimming, sweating, or exercising.
    LessWrong | 23 hours ago
  • The EA case for an EA Group House + how to start one (it's easy!)
    I've started 2 EA(ish) group houses now, so I figured there's an opportunity to share my experience and how you too can start one! There's a whole substack dedicated to community living, so I'll stick to the EA lens of it. Note: My experiences are based in NYC and SF, which have a nice flow of travelers & concentration of like-minded folks.
    Effective Altruism Forum | 23 hours ago
  • The Work Undone
    please explain me
    Atoms vs Bits | 24 hours ago
  • AMA: Svetha Janumpalli, CEO and Founder of New Incentives
    I'm Svetha Janumpalli, founder and CEO of New Incentives. We run a conditional cash transfer program in northern Nigeria that provides small incentives to caregivers to complete routine infant vaccination schedules. Today, we operate across more than 7,000 clinics and have enrolled 6.8 million infants.
    Effective Altruism Forum | 24 hours ago
  • Target Malaria Uganda Joins World Malaria Day Commemoration in Iganga
    Target Malaria Uganda took part in the national commemoration of World Malaria Day in Iganga, held at Bulamagi Subcounty grounds under the theme: “Driven to End Malaria: Now We Can, Now We Must.” The event also included the graduation of over 100 Community Health Extension Workers (CHEWs). The function, held on April 24, 2026, was […].
    Target Malaria | 1 day ago
  • Ireland's AI Research Gap
    Why are we absent from frontier research?
    The Fitzwilliam | 1 day ago
  • Contra Everyone On Taste
    Astral Codex Ten | 1 day ago
  • Hidden Open Thread 432.5
    Astral Codex Ten | 1 day ago
  • Effective Altruism focused on bednets while a malaria vaccine was stuck for 35 years. The case for Abundance.
    This post was cross-posted from Positive Sum by the Forum team. The author notes: I'm not saying every abundance goal meets this bar, e.g. high-speed rail in America would not. This post is intended to clarify abundance's relation to EA rather than to criticize EA prioritization. Subtitle: Functional governance and democracy help many EA cause areas.
    Effective Altruism Forum | 1 day ago
  • Anthropic + xAI Colossus 🤝, Google Expert Advice 💬, design from the inside 🧑‍🎨
    TLDR AI | 1 day ago
  • Many individual CEVs are probably quite bad
    I was thinking about Habryka's article on Putin's CEV, but I am posting my response here, because the original article is already 3 weeks old. I am not sure how exactly a person's CEV is defined.
    LessWrong | 2 days ago
  • Considerations for PPE Strategy
    A misaligned AI or human-AI group could attempt takeover by releasing a highly transmissible engineered pathogen. I discuss what a PPE strategy aimed at this threat model needs to get right.
    Defenses in Depth | 2 days ago
  • US Farm Bill alert, SE Asia incubator, and new global slaughter data
    Your farmed animal advocacy update for early May 2026
    Hive | 2 days ago
  • I made a graphic of the expanding moral circle - free to use
    The "expanding moral circle" -- the idea that moral concern has widened (or, at least, should widen) over time from family, to community, to nation, to all humanity, and (arguably) outward to all sentient beings -- was developed by W.E.H. Lecky (1869) and popularized by Peter Singer in The Expanding Circle (1981).
    Effective Altruism Forum | 2 days ago
  • x-risk-themed
    Sometimes, a friend who works around here, at an x-risk-themed organisation, will think about leaving their job. They’ll ask a group of people, “What should I do instead?” And everyone will chime in with ideas for other x-risk-themed orgs that they could join.
    LessWrong | 2 days ago
  • Why I Find Woke Criticism of Veganism and Effective Altruism So Outrageous
    Using the language of the oppressed to justify ignoring their interests
    Bentham's Newsletter | 2 days ago
  • Faunalytics Index – May 2026
    This month’s Faunalytics Index provides facts and stats about the welfare of egg-laying ducks in Indonesia, a program to help unhoused people and their companion animals, misperceptions about honeybees, and more. The post Faunalytics Index – May 2026 appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Palantir’s controversy is the product
    Palantir’s fiery rhetoric helps mystify its mostly mundane tech — propping up its share price and preserving its national security contracts...
    Transformer | 2 days ago
  • We might only get one real attempt at superintelligence
    Rational Animations | 2 days ago
  • Future-Proofing EU AI Gigafactories: Four Design Imperatives
    The EU's AI Gigafactory initiative is its largest planned compute investment to date. Our new memo identifies four imperatives that the initiative must address to deliver on Europe's frontier AI ambitions.
    The Future Society | 2 days ago
  • Surviving Mirror Life: A Manual for Resilience in Buildings: Introduction to the threat, concepts and scenario parameters
    Epistemic certainty: Obviously loads of uncertainty on mirror life risks and the degree to which we'd have to pressurize buildings or filter outdoor air. Moderately high certainty for the best hasty pathways for doing this in North American buildings and a narrow subset of European buildings. Lower certainty as we move towards international buildings.
    Effective Altruism Forum | 2 days ago
  • What if LLMs are mostly crystallized intelligence?
    Summary: LLMs are better at developing crystallized intelligence than fluid intelligence. That is: LLM training is good at building crystallized intelligence by learning patterns from training data, and this is sufficient to make them surprisingly skillful at lots of tasks.
    LessWrong | 2 days ago
  • EA Forum Digest #290
    Strategic AI debates, everyday impact, and what’s happening across EA. Hello! No news this week; enjoy the digest. — Toby (for the Forum team). We recommend: Open strategic questions for digital minds (Lucius Caviola, 15 min); AIM's new charity taxonomy (Aidan Alexander, Morgan Fairless, Ambitious Impact, 13 min).
    EA Forum Digest | 2 days ago
  • AI Now is Hiring!
    We are at a pivotal moment in the fight to shape the future of AI and its role in society. AI Now is scaling up our team to meet the moment, looking to make three hires to help us grow the organization as we enter our next phase. More information on each role can be […].
    AI Now Institute | 2 days ago
  • AI Now Is Hiring a Comms Associate
    We are looking for a high-touch, digitally savvy communications professional to support the organization’s external presence across a range of channels. The Communications Associate will be a primary point of contact for engagement with the public and press, working in close partnership with our Senior Director and wider team to execute our comms strategy. We […].
    AI Now Institute | 2 days ago
  • AI Now Is Hiring a Senior Operations Director
    We’re looking for a senior leader to support the organization through this next phase of growth. Experienced and results-driven, this individual will have a finger on the pulse of the organization, working in close partnership with our Senior Director to build the systems and processes necessary for our team to thrive. This role requires a […].
    AI Now Institute | 2 days ago
  • AI Now Is Hiring a Program Associate
    We’re looking for a Program Associate to help execute our programs so they can be maximally impactful. With a bias to action and high degree of attention to detail, this individual will work at the frontline of executing AI Now’s flagship reports and events, providing support to the Senior Director across the range of projects […].
    AI Now Institute | 2 days ago
  • We grew ~10x last year, and are now planning for the next 10x
    Hey folks! We’ve recently done an internal impact assessment and thought it would be helpful to share its highlights. (Due to capacity constraints, we opted to share the current post rather than wait for a longer and more polished one, but we’re happy to answer questions.) For context, our goal at Probably Good is to help people build careers that are good for them and for the world.
    Effective Altruism Forum | 2 days ago
  • Useful conversations & resources from our Slack community
    Hive Slack Threads: April
    Hive | 2 days ago
  • The backlash to Billie Eilish’s vegan comments explains a lot about the American left (and everyone else)
    Last week, in a video interview with Elle magazine, the pop star Billie Eilish was asked the following question: “What’s one hill you’d die on?”  “Y’all not gonna like me for this one,” Eilish said. “Eating meat is inherently wrong.”  She then added that it’s hypocritical to say you love all animals but also eat […]...
    Future Perfect | 2 days ago
  • With Me For My Looks
    what's the solution?
    Atoms vs Bits | 2 days ago
  • How Cleaner Salt Production in Tanga Is Improving Nutrition Outcomes
    When you ask families in Tanga what salt means to them, the answer is often simple: “It’s something we cook with every day.” Yet few realise that the quality of that salt (its purity, safety, and level of iodization) directly affects the health of households, particularly children and pregnant...
    Global Alliance for Improved Nutrition | 2 days ago
  • The end of the fallout bunker
    In Germany, at least
    Existential Crunch | 2 days ago
  • Your rights when flying to Europe
    Europe (and the UK) have strong protections for flyers in the case of delayed or cancelled flights. However, very few people are aware of these, and airlines will almost always try to wriggle out of paying up. Even travel agents are often unaware of these laws, or unwilling to fight the airline for you.
    LessWrong | 2 days ago
  • A draft honesty policy for credible communication with AI systems
    If humans and advanced AI systems are going to cooperate—to make honest deals and avoid negative-sum conflict—AIs will need reasons to trust us. By default, they won't have many: humans routinely lie to AIs in evaluations, and developers control much of what models see and believe. We share a sample honesty policy that AI companies could adopt.
    Forethought | 2 days ago
  • Cómo la IA podría generar los mayores problemas del mundo
    ¿Podría ser el trabajo en los riesgos asociados a la IA la elección de carrera profesional con mayor impacto en la actualidad? Descubre por qué la IA puede provocar un cambio social rápido y drástico, y qué puedes hacer al respecto.
    Altruismo Eficaz | 2 days ago
  • Rust in Numbers
    Why do manure spreaders have life cycles?
    Asterisk | 2 days ago
  • iOS 3rd party AI 🤖, OpenAI phone 2027 📱, compounding AI work 📈
    TLDR AI | 2 days ago
  • Model Spec Midtraining: Improving How Alignment Training Generalizes
    tl;dr We introduce model spec midtraining (MSM): after pre-training but before alignment fine-tuning, we train models on synthetic documents discussing their Model Spec, teaching them how they should behave and why. This controls how models generalize from subsequent alignment training—for example, two models with identical fine-tuning can generalize to different values depending on how MSM...
    LessWrong | 2 days ago
  • Is AI really a bubble?
    "A technology can be a bubble and still be real. The dot-com bubble was a bubble. The internet was real." In 2021, experts predicted AI would win a Math Olympiad gold medal in 22 years. It happened last year. A few weeks ago, GPT 5.2 published a novel result in physics. Now the AI companies are openly working on AIs that build smarter AIs that build smarter AIs.
    Machine Intelligence Research Institute | 2 days ago
  • May is Healthy Vision Month. May 10 is Mother’s Day. This is what it looks like to protect a child’s future.
    If you have children, or have ever been around a one-year-old, you know they are into everything. It is the age of eager discovery; of reaching, crawling, and finally finding your feet. Sahil is no different. He has that same drive to explore, but for the first year of his life, he just couldn’t see ….
    Seva Foundation | 3 days ago
  • A Pro-Supply To-Do List for Congress’s Housing Bill
    America can’t afford a lowest-common-denominator housing supply bill
    Institute for Progress | 3 days ago
  • RIP Classic Reasoning Benchmarks. What’s Next?
    Give up at least one of: text only, short time horizon, easy to grade, and expert human superiority.
    Epoch Newsletter | 3 days ago
  • What holds AI safety together? Co-authorship networks from 200 papers
    We (social science PhD students) computed co-authorship networks based on a corpus of 200 AI safety papers covering 2015-2025, and we’d like your help checking if the underlying dataset is right. Co-authorship networks make visible the relative prominence of entities involved in AI safety research, and trace relationships between them.
    LessWrong | 3 days ago
  • Let Kids Keep More Productivity Gains
    While I was traveling Julia asked me: why is Anna saying her fiddle practice is only two minutes? In this case, two minutes was the right amount of time! Anna (10y) and I had been fighting a lot about practice. She'd complain, slump, stop repeatedly to make adjustments, and generally be miserable.
    LessWrong | 3 days ago
  • Does your AI perform badly because you — you, specifically — are a bad person?
    Claude really got me lately. I’d given it an elaborate prompt in an attempt to summon an AGI-level answer to my third-grade level question. Embarrassingly, it included the phrase, “this work might be reviewed by probability theorists, who are very pedantic”. Claude didn’t miss a beat.
    LessWrong | 3 days ago
  • AI risk was not invented by AI CEOs to hype their companies
    I hear that many people believe that the idea of advanced AI threatening human existence was invented by AI CEOs to hype their products. I’ve even been condescendingly informed of this, as if I am the one at risk of naively accepting AI companies’ preferred narratives. If you are reading this, you are probably familiar enough with the decades-old AI safety community to know this isn’t true.
    LessWrong | 3 days ago
  • [Linkpost] Interpreting Language Model Parameters
    This is the latest work in our Parameter Decomposition agenda. We introduce a new parameter decomposition method, adVersarial Parameter Decomposition (VPD) and decompose the parameters of a small language model with it. VPD greatly improves on our previous techniques, Stochastic Parameter Decomposition (SPD) and Attribution-based Parameter Decomposition (APD).
    LessWrong | 3 days ago
  • [Linkpost] Interpreting Language Model Parameters
    This is the latest work in our Parameter Decomposition agenda. We introduce a new parameter decomposition method, adVersarial Parameter Decomposition (VPD) and decompose the parameters of a small language model with it. VPD greatly improves on our previous techniques, Stochastic Parameter Decomposition (SPD) and Attribution-based Parameter Decomposition (APD).
    AI Alignment Forum | 3 days ago
  • New Book — Compassionate Purpose: Personal Inspiration for a Better World
    “How are we to live, in a world in which there is so much unnecessary suffering? Magnus Vinding looks unflinchingly at that question, and gives an answer that is realistic, and yet inspiring. Read this book. It may change your life.” — Peter Singer, author of Animal Liberation. I have just published a book:
    Effective Altruism Forum | 3 days ago
  • Motivated reasoning, confirmation bias, and AI risk theory
    Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias. - From Scott Alexander's review of Julia Galef's The Scout Mindset.
    AI Alignment Forum | 3 days ago
  • What’s new in biology: May 2026
    A cure for congenital deafness, recreating snake venom, antibodies, a legend in cardiovascular medicine, and a successful hair loss treatment?
    The Works in Progress Newsletter | 3 days ago
  • The Best Argument Against Deontology Is About Suitcases
    Pack it up, deontologists!
    Bentham's Newsletter | 3 days ago
  • What Tourists Will (And Won’t) Pay For Whale Watching
    A study of blue whale watchers in Mexico finds that boat crowding more than whale numbers shapes what tourists are willing to pay, with implications for animal welfare, local economies, and conservation.
    Faunalytics | 3 days ago
  • Business Operations Associate / Specialist
    80,000 Hours | 3 days ago
  • People Operations Associate / Specialist
    80,000 Hours | 3 days ago
  • Recruiting Associate / Specialist
    80,000 Hours | 3 days ago


ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.