Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.
  • Sora is here. The window to save visual truth is closing
    Opinion: Sam Gregory argues that generative video is undermining the notion of a shared reality, and that we need to act before it’s lost forever...
    Transformer | 53 minutes ago
  • The Protein Problem
    We likely can't stop the protein craze — but we can make it less cruel...
    Farm Animal Welfare Newsletter | 1 hour ago
  • AI Isn't Bad For The Environment
    Footnotes to Masley
    Bentham's Newsletter | 2 hours ago
  • Faunalytics Index – November 2025
    This month’s Faunalytics Index provides facts and stats about the enforcement of U.K. farmed animal welfare laws, ocean theme parks in China, BIPOC adoption experiences, and more. The post Faunalytics Index – November 2025 appeared first on Faunalytics.
    Faunalytics | 2 hours ago
  • Helen Toner on the geopolitics of AI in China and the Middle East
    The post Helen Toner on the geopolitics of AI in China and the Middle East appeared first on 80,000 Hours.
    80,000 Hours | 2 hours ago
  • Being "Usefully Concrete"
    Or: "Who, what, when, where?" -> "Why?". . In "What's hard about this? What can I do about that? ", I talk about how, when you're facing a difficult situation, it's often useful to list exactly what's difficult about it. And then, systematically brainstorm ideas for dealing with those difficult things. Then, the problem becomes easy.
    LessWrong | 3 hours ago
  • Modeling the geopolitics of AI development
    We model how rapid AI development may reshape geopolitics in the absence of international coordination on preventing dangerous AI development. We focus on predicting which strategies would be pursued by superpowers and middle powers and which outcomes would result from them. You can read our paper here: ai-scenarios.com.
    LessWrong | 3 hours ago
  • Evolution under a microscope
    Generations of microbes evolve in hours, not millennia. By speeding up Darwin’s clock, scientists have watched evolution happen in real time, and it’s changed how we understand natural selection.
    The Works in Progress Newsletter | 4 hours ago
  • Is This Anything? 21
    More attempts
    Atoms vs Bits | 4 hours ago
  • Our top tips for successful networking
    The post Our top tips for successful networking appeared first on 80,000 Hours.
    80,000 Hours | 6 hours ago
  • Are we wrong to stop factory farms?
    I'm sharing an article by Rose Patterson from Animal Rising (AR), which responds to an EA criticism of AR's campaigns to block new factory farms in the UK. When we started our Communities Against Factory Farming campaign to stop every new factory farm from being built, we thought it would be a crowd-pleaser! Who in the animal movement could argue that this could actually be a bad thing?
    Effective Altruism Forum | 6 hours ago
  • EA Forum Digest #265
    Hello! Two giving season updates: I’ve announced the rewards for donors to our Donation Election Fund - you can read more here. Next week will be Funding Strategy Week on the EA Forum. Consider writing something! Enjoy the Digest, Toby (for the Forum team). We recommend: Humanizing Expected Value (kuhanj, 3 min).
    EA Forum Digest | 7 hours ago
  • AnimalHarmBench 2.0: Evaluating LLMs on reasoning about animal welfare
    We are pleased to introduce AnimalHarmBench (AHB) 2.0, a new standardized LLM benchmark designed to measure multi-dimensional moral reasoning towards animals, now available to use on Inspect AI. As LLMs' influence over the policies and behaviors of humanity grows, their biases and blind spots will grow in importance too.
    Effective Altruism Forum | 7 hours ago
  • New study on cluster randomised controlled trials for mosquito release interventions such as gene drive
    New technologies for suppressing populations of disease-transmitting mosquitoes involve releasing modified male mosquitoes into field environments. One example of these technologies is the sterile insect technique (SIT) which releases male mosquitoes that have been sterilised using radiation treatment.
    Target Malaria | 8 hours ago
  • November Brief | Your guide to high-impact giving this season
    New opportunities at Open Philanthropy, The Humane League, Future of Life Institute & more...
    EACN Newsletter | 8 hours ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    Effective Altruism Forum | 8 hours ago
  • Pillar 4: CHEWs Boosting Healthy Living Through PDM
    The post Pillar 4: CHEWs Boosting Healthy Living Through PDM appeared first on Living Goods.
    Living Goods | 11 hours ago
  • Nyong’o launches Kisumu wellness protocol to boost preventive healthcare
    The post Nyong’o launches Kisumu wellness protocol to boost preventive healthcare appeared first on Living Goods.
    Living Goods | 11 hours ago
  • Barn without a roof: outdoor access or not?
    Sentience Politics | 12 hours ago
  • You are going to get priced out of the best AI coding tools
    The best AI tools will become far more expensive. Andy Warhol famously said:
    AI Safety Takes | 13 hours ago
  • Notes on forecasting strategy
    Becoming a top forecaster may not work exactly how you think it does...
    Samstack | 15 hours ago
  • Thoughts by a non-economist on AI and economics
    [Crossposted on Windows In Theory] “Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture -- but none of that stuff had much effect on the quality of people's lives.
    LessWrong | 15 hours ago
  • New CLTC Guide Focuses on Cybersecurity for Mutual Aid Organizations
    The UC Berkeley Cybersecurity Clinic and Fight for the Future have collaborated to create a guide aimed at enhancing the cybersecurity practices of mutual aid organizations. Released as part of the CLTC White Paper Series, the guide — Securing Mutual Aid: Cybersecurity Practices and Design Principles for Financial Technology — outlines best practices to help mutual aids use financial...
    Center for Long-Term Cybersecurity | 17 hours ago
  • Heroic Responsibility
    Meta: Heroic responsibility is a standard concept on LessWrong. I was surprised to find that we don't have a post explaining it to people not already deep in the cultural context, so I wrote this one. Suppose I decide to start a business - specifically a car dealership. One day there's a problem: we sold a car with a bad thingamabob.
    LessWrong | 18 hours ago
  • Low cost MacBook 💻, Google AI data centers 🛰️, inside xAI 🤖
    TLDR AI | 18 hours ago
  • Temporary Policy Lead at NARN
    Policy and Programs Lead (temp). Organization: Northwest Animal Rights Network (NARN) Location: Remote (Washington-based preferred) Hours: 30 hours per week Compensation: $21-$26/hr Reports to: NARN Board. About NARN. The Northwest Animal Rights Network (NARN) advocates for the rights and well-being of all animals through education, advocacy, and community building.
    Animal Advocacy Forum | 19 hours ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    LessWrong | 19 hours ago
  • Breaking: Hundreds of Farmers Sent a Strong Message on Capitol Hill: Congress Should Leave Prop 12 Alone
    In October, over 200 farmers from across the nation came together on Capitol Hill in support of California’s Proposition 12 and in opposition to the EATS Act and any legislative language like it. The farmers held an impactful press conference at the National Press Building, engaged in a tractor and truck rally on the Hill, […].
    Mercy for Animals | 19 hours ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    AI Alignment Forum | 20 hours ago
  • Scale-up: the neglected bottleneck facing alternative proteins
    Hi everyone! I'm Alex, Managing Director of the Good Food Institute Europe. Thanks so much for taking the time to read my post on why scale-up is so important for the future of alternative proteins. This is a topic we're really passionate about at GFI Europe, and one that's becoming increasingly important as the sector matures. I’ll be around in the comments and happy to answer any questions.
    Effective Altruism Forum | 20 hours ago
  • Why We Don’t Act On Our Values (And What to Do About It)
    Discover why people often fail to act on their own values — and how to close the gap between what you care about and what you do. Learn how despair, denial, and defiance hold us back, and find research-based ways to live more in line with your principles.
    Clearer Thinking | 20 hours ago
  • GDM: Consistency Training Helps Limit Sycophancy and Jailbreaks in Gemini 2.5 Flash
    Authors: Alex Irpan* and Alex Turner*, Mark Kurzeja, David Elson, and Rohin Shah. You’re absolutely right to start reading this post! What a rational decision! Even the smartest models’ factuality or refusal training can be compromised by simple changes to a prompt.
    LessWrong | 21 hours ago
  • Introducing the Frontier Data Centers Hub
    We announce our new Frontier Data Centers Hub, a database tracking large AI data centers using satellite and permit data to show compute, power use, and construction timelines.
    Epoch Newsletter | 23 hours ago
  • Comparative advantage & AI
    Also on my substack. I was recently saddened to see that Seb Krier – who's a lead for frontier policy at Google DeepMind – created a simple website apparently endorsing the idea that Ricardian comparative advantage will provide humans with jobs in the time of ASI. The argument that comparative advantage means advanced AI is automatically safe is pretty old and has been addressed multiple times.
    LessWrong | 23 hours ago
  • GDM: Consistency Training Helps Limit Sycophancy and Jailbreaks in Gemini 2.5 Flash
    Authors: Alex Irpan* and Alex Turner*, Mark Kurzeja, David Elson, and Rohin Shah. You’re absolutely right to start reading this post! What a rational decision! Even the smartest models’ factuality or refusal training can be compromised by simple changes to a prompt.
    AI Alignment Forum | 23 hours ago
  • A prayer for engaging in conflict
    Crosspost from my blog. Let these always be remembered: those who suffer, those who experience injustice, those who are silenced, those who are dispossessed, those who are aggressed upon, those who lose what they love, and those whose thriving is thwarted. May I not let hate into my heart; and May I not let my care for the aggressor prevent me from protecting what I love. May I always...
    LessWrong | 23 hours ago
  • Announcing ACE's 2025 Charity Recommendations
    16 minute read. We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food—that is more than all the humans who have ever walked on the face of the Earth. 1.
    Effective Altruism Forum | 23 hours ago
  • Announcing the AIxBiosecurity Research Fellowship
    ERA, in partnership with the Cambridge Biosecurity Hub, is launching a new 8-week, full-time AIxBiosecurity research fellowship dedicated to tackling biosecurity risks amplified by recent advances in frontier AI capabilities. This fully funded programme equips researchers to investigate ways to reduce extreme risks from engineered and natural biological threats amid rapidly advancing...
    Effective Altruism Forum | 23 hours ago
  • Election Day Marketeering
    Above the Fold covers an off-year election day
    Manifold Markets | 24 hours ago
  • Announcing ACE's 2025 Charity Recommendations
    16 minute read. We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food —that is more than all the humans who have ever walked on the face of the Earth. 1.
    Animal Advocacy Forum | 24 hours ago
  • Thoughts by a non-economist on AI and economics
    Crossposted on lesswrong Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture — but none of that stuff had much effect on the quality of people’s lives.
    Windows On Theory | 1 days ago
  • The Last Stop on the Crazy Train
    How to react to a world full of conclusions with potentially astronomical stakes
    Bentham's Newsletter | 1 days ago
  • The Quiet Power Of Research: The Challenge Of Evaluating The Impact Of Research On Animal Advocacy
    In this blog, we share results from Animal Charity Evaluators’ review of Faunalytics, highlight where we align and diverge, and reflect on the evolving role of research in animal advocacy. The post The Quiet Power Of Research: The Challenge Of Evaluating The Impact Of Research On Animal Advocacy appeared first on Faunalytics.
    Faunalytics | 1 days ago
  • Why we need to think about taxing AI
    Large-scale AI-driven unemployment could hit government spending without innovative changes to the tax system
    Transformer | 1 days ago
  • Oaths aren't about oaths, they're about performative speech acts
    The Anglosphere has a rather cavalier attitude towards oaths.
    Thing of Things | 1 days ago
  • E.U. Scientific Body Finds Fur Farming Incompatible With Animal Welfare
    Following an extensive review, the European Food Safety Authority has concluded that the welfare of animals used for fur can’t be protected in the current cage-based systems. The post E.U. Scientific Body Finds Fur Farming Incompatible With Animal Welfare appeared first on Faunalytics.
    Faunalytics | 1 days ago
  • Only 2 days left to apply for the Documentary Research Grant (Deadline: 6th Nov)
    EVFA’s Documentary Research Grant backs nonfiction projects at the moment when research, access, and story exploration define what a film can become! Each selected filmmaker receives an $8,000 USD grant for feasibility research, access-building, and core preparatory materials.
    Animal Advocacy Forum | 1 days ago
  • Readers from Canada, Australia and the EU can now subscribe to Works in Progress
    Our first print edition ships in two weeks.
    The Works in Progress Newsletter | 1 days ago
  • New political advocacy role at the Australian Alliance for Animals
    Hi FAST friends,. The Australian Alliance for Animals is recruiting for a new political advocacy role based in Sydney, Australia. This is an exciting opportunity for an enthusiastic and strategic advocate to drive systemic change for animals by developing advocacy strategies, building coalitions, and influencing decision-makers at the highest levels of state government. Contract: Full Time,...
    Animal Advocacy Forum | 1 days ago
  • Useful conversations & resources from our Slack community
    Hive Slack Threads: October
    Hive | 1 days ago
  • Wars. Climate Denial. Cruelty. Still, We Choose Light.
    We recently celebrated Diwali, the biggest festival in India, and it is all about hope, light, and the spirit of life. On this occasion, I wanted to share what keeps me going. If you’ve been feeling burned out from witnessing animal cruelty or from everything else wrong with the world, like raging wars and leaders […]. The post Wars. Climate Denial. Cruelty. Still, We Choose Light.
    Vegan Outreach | 1 days ago
  • A quiz to raise awareness of malaria prevention measures
    Target Malaria Uganda recently conducted a community focused quiz competition aimed at raising awareness about malaria prevention and enhancing understanding of the project’s research work. This event took place in two of the project island sites: Lwazi Jaana in Kalangala District and Kansambwe, Nsadzi Island in Mukono District. Held over two consecutive days in each […].
    Target Malaria | 1 days ago
  • Research Reflections
    Over the decade I've spent working on AI safety, I've felt an overall trend of divergence; research partnerships starting out with a sense of a common project, then slowly drifting apart over time. It has been frequently said that AI safety is a pre-paradigmatic field.
    LessWrong | 1 days ago
  • OSWorld — AI computer use capabilities
    Tasks are simple, many don't require GUIs, and success often hinges on interpreting ambiguous instructions. The benchmark is also not stable over time.
    Epoch Newsletter | 1 days ago
  • Do Small Protests Work?
    TLDR: The available evidence is weak. It looks like small protests may be effective at garnering support among the general public. Policy-makers appear to be more sensitive to protest size, and it’s not clear whether small protests have a positive or negative effect on their perception.
    Philosophical Multicore | 1 days ago
  • Deforestation in the Brazilian Amazon has fallen again in 2025
    Progress on deforestation, but increased threats from wildfires.
    Sustainability by Numbers | 1 days ago
  • Consistency Training Helps Stop Sycophancy and Jailbreaks
    The Pond | 1 days ago
  • The Zen Of Maxent As A Generalization Of Bayes Updates
    Jaynes’ Widget Problem: How Do We Update On An Expected Value? Mr A manages a widget factory. The factory produces widgets of three colors - red, yellow, green - and part of Mr A’s job is to decide how many widgets to paint each color.
    LessWrong | 2 days ago
  • I ate bear fat with honey and salt flakes, to prove a point
    And it was surprisingly good. Based on an old tweet from Eliezer Yudkowsky, I decided to buy a jar of bear fat online, and make a treat for the people at Inkhaven. It was surprisingly good. My post discusses how that happened, and a bit about the implications for Eliezer's thesis.
    LessWrong | 2 days ago
  • We have updated our impact calculations - November 2025
    Ayuda Efectiva | 2 days ago
  • What you need to know about AI data centers
    Epoch Blog | 2 days ago
  • OpenAI AWS $38B deal 💰, Facebook Dating ❤️, htmx 4 👨‍💻
    TLDR AI | 2 days ago
  • Introducing the Frontier Data Centers Hub
    Epoch Blog | 2 days ago
  • Sustainable Dining Training for Jewish Communities
    From Kitchen to Climate: Sustainable Dining Training 📅 Date: Tuesday Nov 4th ⏰ Time: 12 pm PT / 3 pm ET 📍 Location: Zoom 🔗 Register here (free): https://adamah.tfaforms.net/341. Program Description: Join Adamah and the Center for Jewish Food Ethics for a Sustainable Dining Training on Nov 4th!
    Animal Advocacy Forum | 2 days ago
  • Anima International is hiring in Norway! Check out our three open positions.
    Join our team and help create a better world for millions of animals! We’re currently hiring for three full-time positions in our Norwegian team: 📢 Campaigns & Communications Generalist Be on the front lines convincing major Norwegian food companies to eliminate the most severe causes of animal suffering. 🧰 Operations Generalist Help ensure our organization runs effectively and efficiently...
    Animal Advocacy Forum | 2 days ago
  • Policy Specialist
    Role Summary. As a Policy Specialist (all genders) in the field of Sustainable Public Food Procurement, you will be part of the German Policy Team and responsible both for developing policy proposals that support the implementation of the national nutrition strategy “Good Food for Germany” at state and municipal level, and for engaging in dialogue with political representatives and public...
    Animal Advocacy Forum | 2 days ago
  • V-Label Senior Quality Manager
    Role Summary. The V-Label is an internationally recognized trademark, protected since 1996, that identifies vegetarian and vegan products. In Germany, ProVeg e. V. is responsible for awarding the V-Label. ProVeg is a food awareness organization committed to transforming the global food system by replacing animal-based products with plant-based and cultivated alternatives. Do you want to...
    Animal Advocacy Forum | 2 days ago
  • New SSIR article features Senterra’s approach to scaling impact for animals
    Dear colleagues,. As part of our efforts to bring more donors and funding into the farmed animal protection movement, Senterra (formerly Farmed Animal Funders) just published an article in the Stanford Social Innovation Review (SSIR) about our funder community and ways we’re coordinating to drive outsized impact.
    Animal Advocacy Forum | 2 days ago
  • New SSIR article features Senterra’s approach to scaling impact for animals
    Dear colleagues,. As part of our efforts to bring more donors and funding into the farmed animal protection movement, Senterra (formerly Farmed Animal Funders) just published an article in the Stanford Social Innovation Review (SSIR) about our funder community and ways we’re coordinating to drive outsized impact.
    Animal Advocacy Forum | 2 days ago
  • What's up with Anthropic predicting AGI by early 2027?
    As far as I'm aware, Anthropic is the only AI company with official AGI timelines: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
    LessWrong | 2 days ago
  • Common Misconceptions About Anger?
    People often say things like the following about anger’s relationship to other emotions – but are they B.S.? They say: While there is debate about these ideas among people in the field, my opinion is that these statements are misleading and, in some cases, wrong. I think these statements can promote misunderstandings about the nature […]...
    Optimize Everything | 2 days ago
  • The Tale of the Top-Tier Intellect
    Once upon a time in the medium-small town of Skewers, Washington, there lived a 52-year-old man by the name of Mr. Humman, who considered himself a top-tier chess-player. Now, Mr. Humman was not generally considered the strongest player in town; if you asked the other inhabitants of Skewers, most of them would've named Mr. Neumann as their town's chess champion.
    LessWrong | 2 days ago
  • Leaving Open Philanthropy, going to Anthropic
    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.). Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude’s character/constitution/spec.
    LessWrong | 2 days ago
  • The Strategic Calculus of AI R&D Automation
    When AI automates AI development, the question shifts from ‘What can we build?’ to ‘What should we build first?’ As difficulty declines, differential value dominates.
    AI Prospects: Toward Global Goal Convergence | 2 days ago
  • Trying to understand my own cognitive edge
    I applaud Eliezer for trying to make himself redundant, and think it's something every intellectually successful person should spend some time and effort on. I've been trying to understand my own "edge" or "moat", or cognitive traits that are responsible for whatever success I've had, in the hope of finding a way to reproduce it in others, but I'm having trouble understanding a part of it, and...
    LessWrong | 2 days ago
  • The Unreasonable Effectiveness of Fiction
    [Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.]. In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker.
    LessWrong | 2 days ago
  • Publishing academic papers on transformative AI is a nightmare
    I am a professor of economics. Throughout my career, I was mostly working on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology.
    LessWrong | 2 days ago
  • Leaving Open Philanthropy, going to Anthropic
    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.). Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude’s character/constitution/spec.
    Effective Altruism Forum | 2 days ago
  • How and why you should make your home smart (it's cheap and secure!)
    Your average day starts with an alarm on your phone. Sometimes, you wake up a couple of minutes before it sounds. Sometimes, you find the button to snooze it. Sometimes, you’re already on the phone and it appears as a notification. But when you finally stop it, the lights in your room turn on and you start your day. You walk out of your room.
    LessWrong | 2 days ago
  • Could a Garland Fund 2.0 Upend America Today?
    Editors’ Note: David Pozen continues HistPhil’s book forum on John Witt’s The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America (Simon & Schuster, 2025). A version of this post originally appeared on the Balkinization blog, which is conducting a forum on Witt’s book as well, with some outstanding contributions by … Continue reading →...
    HistPhil | 2 days ago
  • 🟩 Trump threatens to send troops to Nigeria and denies Venezuela attack plans, China-US trade détente || Global Risks Weekly Roundup #44/2025
    Iran is carrying out more construction in and around a mountainous nuclear site. The Rapid Support Forces (RSF) have captured the city of el-Fasher in Sudan.
    Sentinel | 2 days ago
  • Leaving Open Philanthropy, going to Anthropic
    On a career move, and on AI-safety-focused people working at AI companies.
    Joe Carlsmith | 2 days ago
  • Leftists want real jobs on the leftist commune
    This is one of the most pedantic posts I’ve ever written.
    Thing of Things | 2 days ago
  • What's up with Anthropic predicting AGI by early 2027?
    I operationalize Anthropic's prediction of "powerful AI" and explain why I'm skeptical
    Redwood Research | 2 days ago
  • Writing For The AIs
    Astral Codex Ten | 2 days ago
  • Leaving Open Philanthropy, going to Anthropic
    On a career move, and on AI-safety-focused people working at AI companies. Text version here: https://joecarlsmith.com/2025/11/03/leaving-open-philanthropy-going-to-anthropic/
    Joe Carlsmith Audio | 2 days ago
  • Blogging: A Balanced View
    It's not all doom and gloom
    Bentham's Newsletter | 2 days ago
  • Macho Meals: How Masculinity Drives Men’s Meat Attachment
    What does masculinity have to do with meat? Men’s attachment to meat is tied to traditional masculine ideals, but reframing plant-based eating as strong and self-directed could help change that. The post Macho Meals: How Masculinity Drives Men’s Meat Attachment appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • A glimpse of the other side
    I like to wake up early to watch the sunrise. The sun hits the distant city first, the little sliver of it I can see through the trees. The buildings light up copper against the pale pink sky, and that little sliver is the only bit of saturation in an otherwise grey visual field. Then the sun starts to rise over the hill behind me.
    LessWrong | 2 days ago
  • Recruitment is extremely important and impactful. Some people should be completely obsessed with it.
    Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes:
    Effective Altruism Forum | 2 days ago
  • Feedback Loops Rule Everything Around Me
    Feeeeed meeee
    Atoms vs Bits | 2 days ago
  • The EU AI Act Newsletter #89: AI Standards Acceleration Updates
    CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
    The EU AI Act Newsletter | 2 days ago
  • ChinAI #334: How AI is "Transforming" a Chinese University's Humanities Program
    Greetings from a world where…...
    ChinAI Newsletter | 2 days ago
  • Compassion in World Farming Southern Africa is Hiring: Communications Officer
    Compassion in World Farming International is a global movement transforming the future of food and farming. We’re recruiting for a passionate and skilled Communications Officer to help amplify our voice and impact across Southern Africa. . Communications Officer – Southern Africa. Role Type: Contract until end of March 2026- Part-time 2 days per week. Location: South Africa - Remote.
    Animal Advocacy Forum | 2 days ago
  • Open Thread 406
    Astral Codex Ten | 2 days ago
  • Why peanut butter is back on the kids’ menu
    If, like me, you’re a parent of a young child, there’s one thing you’ve come to fear above all else. (And no, it’s not “Golden” from KPop Demon Hunters played for the 10,000th time, though that’s a close second.). It’s the humble peanut. Even if your child isn’t allergic to the nuts, past surveys have […]...
    Future Perfect | 2 days ago
  • You don’t need better boundaries. You need a better framework.
    Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a […]...
    Future Perfect | 2 days ago
  • Erasmus: Social Engineering at Scale
    Sofia Corradi, a.k.a. Mamma Erasmus (2020). When Sofia Corradi died on October 17th, the press was full of obituaries for the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”
    LessWrong | 2 days ago
  • Community Health Workers Transform Reproductive Health Service Delivery in Wakiso District, Uganda
    The post Community Health Workers Transform Reproductive Health Service Delivery in Wakiso District, Uganda appeared first on Living Goods.
    Living Goods | 2 days ago
  • "What's hard about this? What can I do about that?" (Recursive)
    Third in a series of short rationality prompts... My opening rationality move is often "What's my goal?" It is closely followed by: "Why is this hard? And, what can I do about that?" If you're busting out deliberate "rationality" tools (instead of running on intuition or copying your neighbors), something about your situation is probably difficult.
    LessWrong | 2 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Anisha Zaveri
  • Linch Zhang

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.