Effective Altruism News

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Barn without a roof: outdoor access or not?
    Sentience Politics | 28 minutes ago
  • You are going to get priced out of the best AI coding tools
    The best AI tools will become far more expensive. Andy Warhol famously said:
    AI Safety Takes | 2 hours ago
  • Notes on forecasting strategy
    Becoming a top forecaster may not work exactly how you think it does...
    Samstack | 3 hours ago
  • Thoughts by a non-economist on AI and economics
    [Crossposted on Windows In Theory] . “Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture -- but none of that stuff had much effect on the quality of people's lives.
    LessWrong | 3 hours ago
  • New CLTC Guide Focuses on Cybersecurity for Mutual Aid Organizations
    The UC Berkeley Cybersecurity Clinic and Fight for the Future have collaborated to create a guide aimed at enhancing the cybersecurity practices of mutual aid organizations. Released as part of the CLTC White Paper Series, the guide — Securing Mutual Aid: Cybersecurity Practices and Design Principles for Financial Technology — outlines best practices to help mutual aids use financial...
    Center for Long-Term Cybersecurity | 5 hours ago
  • Heroic Responsibility
    Meta: Heroic responsibility is a standard concept on LessWrong. I was surprised to find that we don't have a post explaining it to people not already deep in the cultural context, so I wrote this one. Suppose I decide to start a business - specifically a car dealership. One day there's a problem: we sold a car with a bad thingamabob.
    LessWrong | 6 hours ago
  • Temporary Policy Lead at NARN
    Policy and Programs Lead (temp). Organization: Northwest Animal Rights Network (NARN) Location: Remote (Washington-based preferred) Hours: 30 hours per week Compensation: $21-$26/hr Reports to: NARN Board. About NARN. The Northwest Animal Rights Network (NARN) advocates for the rights and well-being of all animals through education, advocacy, and community building.
    Animal Advocacy Forum | 7 hours ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    LessWrong | 7 hours ago
  • Legible vs. Illegible AI Safety Problems
    Some AI safety problems are legible (obvious or understandable) to company leaders and government policymakers, implying they are unlikely to deploy or allow deployment of an AI while those problems remain open (i.e., appear unsolved according to the information they have access to).
    AI Alignment Forum | 8 hours ago
  • Scale-up: the neglected bottleneck facing alternative proteins
    Hi everyone! I'm Alex, Managing Director of the Good Food Institute Europe. Thanks so much for taking the time to read my post on why scale-up is so important for the future of alternative proteins. This is a topic we're really passionate about at GFI Europe, and one that's becoming increasingly important as the sector matures. I’ll be around in the comments and happy to answer any questions.
    Effective Altruism Forum | 8 hours ago
  • Why We Don’t Act On Our Values (And What to Do About It)
    Discover why people often fail to act on their own values — and how to close the gap between what you care about and what you do. Learn how despair, denial, and defiance hold us back, and find research-based ways to live more in line with your principles.
    Clearer Thinking | 9 hours ago
  • GDM: Consistency Training Helps Limit Sycophancy and Jailbreaks in Gemini 2.5 Flash
    Authors: Alex Irpan* and Alex Turner*, Mark Kurzeja, David Elson, and Rohin Shah. You’re absolutely right to start reading this post! What a rational decision! Even the smartest models’ factuality or refusal training can be compromised by simple changes to a prompt.
    LessWrong | 10 hours ago
  • Introducing the Frontier Data Centers Hub
    We announce our new Frontier Data Centers Hub, a database tracking large AI data centers using satellite and permit data to show compute, power use, and construction timelines.
    Epoch Newsletter | 11 hours ago
  • Comparative advantage & AI
    Also on my substack. I was recently saddened to see that Seb Krier – who's a lead on the Google DeepMind governance team – created a simple website apparently endorsing the idea that Ricardian comparative advantage will provide humans with jobs in the time of ASI. The argument that comparative advantage means advanced AI is automatically safe is pretty old and has been addressed multiple times.
    LessWrong | 12 hours ago
  • GDM: Consistency Training Helps Limit Sycophancy and Jailbreaks in Gemini 2.5 Flash
    Authors: Alex Irpan* and Alex Turner*, Mark Kurzeja, David Elson, and Rohin Shah. You’re absolutely right to start reading this post! What a rational decision! Even the smartest models’ factuality or refusal training can be compromised by simple changes to a prompt.
    AI Alignment Forum | 12 hours ago
  • A prayer for engaging in conflict
    Crosspost from my blog. Let these always be remembered: those who suffer, those who experience injustice, those who are silenced, those who are dispossessed, those who are aggressed upon, those who lose what they love, and those whose thriving is thwarted. May I not let hate into my heart; and May I not let my care for the aggressor prevent me from protecting what I love. May I always...
    LessWrong | 12 hours ago
  • Announcing ACE's 2025 Charity Recommendations
    16 minute read. We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food—that is more than all the humans who have ever walked on the face of the Earth. 1.
    Effective Altruism Forum | 12 hours ago
  • Announcing the AIxBiosecurity Research Fellowship
    ERA, in partnership with the Cambridge Biosecurity Hub, is launching a new 8-week, full-time AIxBiosecurity research fellowship dedicated to tackling biosecurity risks amplified by recent advances in frontier AI capabilities. This fully funded programme equips researchers to investigate ways to reduce extreme risks from engineered and natural biological threats amid rapidly advancing...
    Effective Altruism Forum | 12 hours ago
  • Election Day Marketeering
    Above the Fold covers an off-year election day
    Manifold Markets | 12 hours ago
  • Announcing ACE's 2025 Charity Recommendations
    16 minute read. We update our list of Recommended Charities annually. This year, we announced recommendations on November 4. Each year, hundreds of billions of animals are trapped in the food industry and killed for food—that is more than all the humans who have ever walked on the face of the Earth. 1.
    Animal Advocacy Forum | 13 hours ago
  • Thoughts by a non-economist on AI and economics
    Crossposted on lesswrong Modern humans first emerged about 100,000 years ago. For the next 99,800 years or so, nothing happened. Well, not quite nothing. There were wars, political intrigue, the invention of agriculture — but none of that stuff had much effect on the quality of people’s lives.
    Windows On Theory | 14 hours ago
  • The Last Stop on the Crazy Train
    How to react to a world full of conclusions with potentially astronomical stakes
    Bentham's Newsletter | 14 hours ago
  • The Quiet Power Of Research: The Challenge Of Evaluating The Impact Of Research On Animal Advocacy
    In this blog, we share results from Animal Charity Evaluators’ review of Faunalytics, highlight where we align and diverge, and reflect on the evolving role of research in animal advocacy.
    Faunalytics | 14 hours ago
  • Why we need to think about taxing AI
    Large-scale AI-driven unemployment could hit government spending without innovative changes to the tax system
    Transformer | 15 hours ago
  • Oaths aren't about oaths, they're about performative speech acts
    The Anglosphere has a rather cavalier attitude towards oaths.
    Thing of Things | 15 hours ago
  • E.U. Scientific Body Finds Fur Farming Incompatible With Animal Welfare
    Following an extensive review, the European Food Safety Authority has concluded that the welfare of animals used for fur can’t be protected in the current cage-based systems.
    Faunalytics | 15 hours ago
  • Only 2 days left to apply for the Documentary Research Grant (Deadline: 6th Nov)
    EVFA’s Documentary Research Grant backs nonfiction projects at the moment when research, access, and story exploration define what a film can become! Each selected filmmaker receives an $8,000 USD grant for feasibility research, access-building, and core preparatory materials.
    Animal Advocacy Forum | 15 hours ago
  • Readers from Canada, Australia and the EU can now subscribe to Works in Progress
    Our first print edition ships in two weeks.
    The Works in Progress Newsletter | 16 hours ago
  • New political advocacy role at the Australian Alliance for Animals
    Hi FAST friends. The Australian Alliance for Animals is recruiting for a new political advocacy role based in Sydney, Australia. This is an exciting opportunity for an enthusiastic and strategic advocate to drive systemic change for animals by developing advocacy strategies, building coalitions, and influencing decision-makers at the highest levels of state government. Contract: Full Time,...
    Animal Advocacy Forum | 18 hours ago
  • Useful conversations & resources from our Slack community
    Hive Slack Threads: October
    Hive | 18 hours ago
  • Wars. Climate Denial. Cruelty. Still, We Choose Light.
    We recently celebrated Diwali, the biggest festival in India, and it is all about hope, light, and the spirit of life. On this occasion, I wanted to share what keeps me going. If you’ve been feeling burned out from witnessing animal cruelty or from everything else wrong with the world, like raging wars and leaders […].
    Vegan Outreach | 21 hours ago
  • A quiz to raise awareness of malaria prevention measures
    Target Malaria Uganda recently conducted a community focused quiz competition aimed at raising awareness about malaria prevention and enhancing understanding of the project’s research work. This event took place in two of the project island sites: Lwazi Jaana in Kalangala District and Kansambwe, Nsadzi Island in Mukono District. Held over two consecutive days in each […].
    Target Malaria | 22 hours ago
  • Research Reflections
    Over the decade I've spent working on AI safety, I've felt an overall trend of divergence; research partnerships starting out with a sense of a common project, then slowly drifting apart over time. It has been frequently said that AI safety is a pre-paradigmatic field.
    LessWrong | 22 hours ago
  • OSWorld — AI computer use capabilities
    Tasks are simple, many don't require GUIs, and success often hinges on interpreting ambiguous instructions. The benchmark is also not stable over time.
    Epoch Newsletter | 22 hours ago
  • Do Small Protests Work?
    TLDR: The available evidence is weak. It looks like small protests may be effective at garnering support among the general public. Policy-makers appear to be more sensitive to protest size, and it’s not clear whether small protests have a positive or negative effect on their perception.
    Philosophical Multicore | 23 hours ago
  • Deforestation in the Brazilian Amazon has fallen again in 2025
    Progress on deforestation, but increased threats from wildfires.
    Sustainability by Numbers | 23 hours ago
  • Consistency Training Helps Stop Sycophancy and Jailbreaks
    The Pond | 1 day ago
  • The Zen Of Maxent As A Generalization Of Bayes Updates
    Jaynes’ Widget Problem: How Do We Update On An Expected Value? Mr A manages a widget factory. The factory produces widgets of three colors - red, yellow, green - and part of Mr A’s job is to decide how many widgets to paint each color.
    LessWrong | 1 day ago
  • I ate bear fat with honey and salt flakes, to prove a point
    And it was surprisingly good. Based on an old tweet from Eliezer Yudkowsky, I decided to buy a jar of bear fat online, and make a treat for the people at Inkhaven. It was surprisingly good. My post discusses how that happened, and a bit about the implications for Eliezer's thesis.
    LessWrong | 1 day ago
  • What you need to know about AI data centers
    Epoch Blog | 1 day ago
  • We’ve updated our impact calculations - November 2025
    Ayuda Efectiva | 1 day ago
  • Introducing the Frontier Data Centers Hub
    Epoch Blog | 1 day ago
  • OpenAI AWS $38B deal 💰, Facebook Dating ❤️, htmx 4 👨‍💻
    TLDR AI | 1 day ago
  • Sustainable Dining Training for Jewish Communities
    From Kitchen to Climate: Sustainable Dining Training 📅 Date: Tuesday Nov 4th ⏰ Time: 12 pm PT / 3 pm ET 📍 Location: Zoom 🔗 Register here (free): https://adamah.tfaforms.net/341. Program Description: Join Adamah and the Center for Jewish Food Ethics for a Sustainable Dining Training on Nov 4th!
    Animal Advocacy Forum | 1 day ago
  • Anima International is hiring in Norway! Check out our three open positions.
    Join our team and help create a better world for millions of animals! We’re currently hiring for three full-time positions in our Norwegian team: 📢 Campaigns & Communications Generalist Be on the front lines convincing major Norwegian food companies to eliminate the most severe causes of animal suffering. 🧰 Operations Generalist Help ensure our organization runs effectively and efficiently...
    Animal Advocacy Forum | 1 day ago
  • Policy Specialist
    Role Summary. As a Policy Specialist (all genders) in the field of Sustainable Public Food Procurement, you will be part of the German Policy Team and responsible both for developing policy proposals that support the implementation of the national nutrition strategy “Good Food for Germany” at state and municipal level, and for engaging in dialogue with political representatives and public...
    Animal Advocacy Forum | 1 day ago
  • V-Label Senior Quality Manager
    Role Summary. The V-Label is an internationally recognized trademark, protected since 1996, that identifies vegetarian and vegan products. In Germany, ProVeg e. V. is responsible for awarding the V-Label. ProVeg is a food awareness organization committed to transforming the global food system by replacing animal-based products with plant-based and cultivated alternatives. Do you want to...
    Animal Advocacy Forum | 1 day ago
  • New SSIR article features Senterra’s approach to scaling impact for animals
    Dear colleagues. As part of our efforts to bring more donors and funding into the farmed animal protection movement, Senterra (formerly Farmed Animal Funders) just published an article in the Stanford Social Innovation Review (SSIR) about our funder community and ways we’re coordinating to drive outsized impact.
    Animal Advocacy Forum | 1 day ago
  • What's up with Anthropic predicting AGI by early 2027?
    As far as I'm aware, Anthropic is the only AI company with official AGI timelines: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
    LessWrong | 1 day ago
  • Common Misconceptions About Anger?
    People often say things like the following about anger’s relationship to other emotions – but are they B.S.? They say: While there is debate about these ideas among people in the field, my opinion is that these statements are misleading and, in some cases, wrong. I think these statements can promote misunderstandings about the nature […]...
    Optimize Everything | 1 day ago
  • The Tale of the Top-Tier Intellect
    Once upon a time in the medium-small town of Skewers, Washington, there lived a 52-year-old man by the name of Mr. Humman, who considered himself a top-tier chess-player. Now, Mr. Humman was not generally considered the strongest player in town; if you asked the other inhabitants of Skewers, most of them would've named Mr. Neumann as their town's chess champion.
    LessWrong | 1 day ago
  • Leaving Open Philanthropy, going to Anthropic
    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.). Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude’s character/constitution/spec.
    LessWrong | 1 day ago
  • The Strategic Calculus of AI R&D Automation
    When AI automates AI development, the question shifts from ‘What can we build?’ to ‘What should we build first?’ As difficulty declines, differential value dominates.
    AI Prospects: Toward Global Goal Convergence | 1 day ago
  • Trying to understand my own cognitive edge
    I applaud Eliezer for trying to make himself redundant, and think it's something every intellectually successful person should spend some time and effort on. I've been trying to understand my own "edge" or "moat", or cognitive traits that are responsible for whatever success I've had, in the hope of finding a way to reproduce it in others, but I'm having trouble understanding a part of it, and...
    LessWrong | 1 day ago
  • The Unreasonable Effectiveness of Fiction
    [Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.]. In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker.
    LessWrong | 1 day ago
  • Publishing academic papers on transformative AI is a nightmare
    I am a professor of economics. Throughout my career, I was mostly working on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology.
    LessWrong | 1 day ago
  • Leaving Open Philanthropy, going to Anthropic
    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.). Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude’s character/constitution/spec.
    Effective Altruism Forum | 1 day ago
  • How and why you should make your home smart (it's cheap and secure!)
    Your average day starts with an alarm on your phone. Sometimes, you wake up a couple of minutes before it sounds. Sometimes, you find the button to snooze it. Sometimes, you’re already on the phone and it appears as a notification. But when you finally stop it, the lights in your room turn on and you start your day. You walk out of your room.
    LessWrong | 1 day ago
  • Could a Garland Fund 2.0 Upend America Today?
    Editors’ Note: David Pozen continues HistPhil’s book forum on John Witt’s The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America (Simon & Schuster, 2025). A version of this post originally appeared on the Balkinization blog, which is conducting a forum on Witt’s book as well, with some outstanding contributions by … Continue reading →...
    HistPhil | 1 day ago
  • 🟩 Trump threatens to send troops to Nigeria and denies Venezuela attack plans, China-US trade détente || Global Risks Weekly Roundup #44/2025
    Iran is carrying out more construction in and around a mountainous nuclear site. The Rapid Support Forces (RSF) have captured the city of el-Fasher in Sudan.
    Sentinel | 2 days ago
  • Leaving Open Philanthropy, going to Anthropic
    On a career move, and on AI-safety-focused people working at AI companies.
    Joe Carlsmith | 2 days ago
  • Leftists want real jobs on the leftist commune
    This is one of the most pedantic posts I’ve ever written.
    Thing of Things | 2 days ago
  • What's up with Anthropic predicting AGI by early 2027?
    I operationalize Anthropic's prediction of "powerful AI" and explain why I'm skeptical
    Redwood Research | 2 days ago
  • Writing For The AIs
    Astral Codex Ten | 2 days ago
  • Leaving Open Philanthropy, going to Anthropic
    On a career move, and on AI-safety-focused people working at AI companies. Text version here: https://joecarlsmith.com/2025/11/03/leaving-open-philanthropy-going-to-anthropic/
    Joe Carlsmith Audio | 2 days ago
  • Blogging: A Balanced View
    It's not all doom and gloom
    Bentham's Newsletter | 2 days ago
  • Macho Meals: How Masculinity Drives Men’s Meat Attachment
    What does masculinity have to do with meat? Men’s attachment to meat is tied to traditional masculine ideals, but reframing plant-based eating as strong and self-directed could help change that. The post Macho Meals: How Masculinity Drives Men’s Meat Attachment appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • A glimpse of the other side
    I like to wake up early to watch the sunrise. The sun hits the distant city first, the little sliver of it I can see through the trees. The buildings light up copper against the pale pink sky, and that little sliver is the only bit of saturation in an otherwise grey visual field. Then the sun starts to rise over the hill behind me.
    LessWrong | 2 days ago
  • Recruitment is extremely important and impactful. Some people should be completely obsessed with it.
    Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes:
    Effective Altruism Forum | 2 days ago
  • Feedback Loops Rule Everything Around Me
    Feeeeed meeee
    Atoms vs Bits | 2 days ago
  • The EU AI Act Newsletter #89: AI Standards Acceleration Updates
    CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
    The EU AI Act Newsletter | 2 days ago
  • ChinAI #334: How AI is "Transforming" a Chinese University's Humanities Program
    Greetings from a world where…
    ChinAI Newsletter | 2 days ago
  • Compassion in World Farming Southern Africa is Hiring: Communications Officer
    Compassion in World Farming International is a global movement transforming the future of food and farming. We’re recruiting for a passionate and skilled Communications Officer to help amplify our voice and impact across Southern Africa. Communications Officer – Southern Africa. Role Type: Contract until end of March 2026, part-time (2 days per week). Location: South Africa – Remote.
    Animal Advocacy Forum | 2 days ago
  • Open Thread 406
    Astral Codex Ten | 2 days ago
  • Why peanut butter is back on the kids’ menu
    If, like me, you’re a parent of a young child, there’s one thing you’ve come to fear above all else. (And no, it’s not “Golden” from KPop Demon Hunters played for the 10,000th time, though that’s a close second.). It’s the humble peanut. Even if your child isn’t allergic to the nuts, past surveys have […]...
    Future Perfect | 2 days ago
  • You don’t need better boundaries. You need a better framework.
    Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a […]...
    Future Perfect | 2 days ago
  • Erasmus: Social Engineering at Scale
    Sofia Corradi, a.k.a. Mamma Erasmus (2020). When Sofia Corradi died on October 17th, the press was full of obituaries for the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”.
    LessWrong | 2 days ago
  • Community Health Workers Transform Reproductive Health Service Delivery in Wakiso District, Uganda
    The post Community Health Workers Transform Reproductive Health Service Delivery in Wakiso District, Uganda appeared first on Living Goods.
    Living Goods | 2 days ago
  • "What's hard about this? What can I do about that?" (Recursive)
    Third in a series of short rationality prompts. My opening rationality move is often "What's my goal?". It is closely followed by: "Why is this hard? And, what can I do about that?". If you're busting out deliberate "rationality" tools (instead of running on intuition or copying your neighbors), something about your situation is probably difficult.
    LessWrong | 2 days ago
  • Most Irish Foreign Aid Never Leaves the Country
    But, weirdly, this is fine (for now)
    The Fitzwilliam | 2 days ago
  • Switching to chicken farming: the biggest disaster in Flanders
    Opinion piece in De Standaard (3 November 2025). Last week we read an article in the newspaper that did not cause much of a stir. No one seems to realize that it concerns by far the biggest disaster in Flanders: West Flemish farmers who are switching to … Read more →...
    The Rational Ethicist | 2 days ago
  • My Third Caffeine Self-Experiment
    Last year I did a caffeine cycling self-experiment and I determined that I don’t get habituated to caffeine when I drink coffee three days a week. I did a follow-up experiment where I upgraded to four days a week (Mon/Wed/Fri/Sat) and I found that I still don’t get habituated. For my current weekly routine, I have caffeine on Monday, Wednesday, Friday, and Saturday.
    Philosophical Multicore | 2 days ago
  • Lack of Social Grace is a Lack of Skill
    I. I have claimed that one of the fundamental questions of rationality is “what am I about to do and what will happen next?” One of the domains I ask this question the most is in social situations. There are a great many skills in the world. If I had the time and resources to do so, I’d want to master all of them.
    LessWrong | 2 days ago
  • The Case for DMT for Cluster Headaches: Practical Tips & Why It Deserves Urgent Scientific Attention
    Transcript: Using DMT to Abort Cluster Headaches May all beings be free from suffering, especially those who are trapped in hell. Welcome everybody. Today we’re going to talk about a pretty gnarly topic, but it’s a very important one. I think if we focus as a community and make direct, persistent action towards these goals, […]...
    Qualia Computing | 2 days ago
  • From sand to solar tent
    Moma, Mozambique – When Islova Alberto Aly decided to venture into fish drying, her primary aim was to generate an income to support her children's education.
    Global Alliance for Improved Nutrition | 2 days ago
  • Against Subjectivism
    My second piece on summarizing Chappell’s summary of his summary of Parfit.
    Hauke’s Blog | 2 days ago
  • Why I Transitioned: A Case Study
    An Overture. Famously, trans people tend not to have great introspective clarity into their own motivations for transition. Intuitively, they tend to be quite aware of what they do and don't like about inhabiting their chosen bodies and gender roles. But when it comes to explaining the origins and intensity of those preferences, they almost universally come up short.
    LessWrong | 2 days ago
  • Halfhaven halftime
    Halfhaven is a virtual blogger camp, an online alternative to the Inkhaven Residency. The rules are simple: every day, post at most one article of at least 500 words (or equivalent effort), and try to reach 30 posts by the end of November (but there are no hard lines). The invitation links keep expiring; the current one is: https://discord.gg/jrJPR3h6.
    LessWrong | 2 days ago
  • Is vaping less harmful than smoking, and does it help people quit?
    Answers to some of the most frequently asked questions about vaping and its effects.
    Our World in Data | 2 days ago
  • Gemini Siri 📱, SpaceX datacenters 🛰️, GitHub immutable releases 👨‍💻
    TLDR AI | 2 days ago
  • Human Values ≠ Goodness
    There is a temptation to simply define Goodness as Human Values, or vice versa. Alas, we do not get to choose the definitions of commonly used words; our attempted definitions will simply be wrong. Unless we stick to mathematics, we will end up sneaking in intuitions which do not follow from our so-called definitions, and thereby mislead ourselves.
    LessWrong | 2 days ago
  • Weak-To-Strong Generalization
    I will be discussing weak-to-strong generalization with Sahil on Monday, November 3rd, 2025, 11am Pacific Daylight Time. You can join the discussion with this link. Weak-to-strong generalization is an approach to alignment (and capabilities) which seeks to address the scarcity of human feedback by using a weak model to teach a strong model.
    AI Alignment Forum | 2 days ago
  • FTL travel and scientific realism
    It's November! I'm not doing Inkhaven, or NaNoWriMo (RIP), or writing a short story every day, or quitting shaving or anything else. But I (along with some housemates) am going to try to write a blog post of at least 500 words every day of the month. (Inkhaven is just down the street a bit and I'm hoping to benefit from some kind of proximity effect.). Today: Llamamoe on Discord complains about.
    LessWrong | 2 days ago
  • The Biggest Unsolved Problem in Philosophy of Science
    Should we believe our scientific theories?
    Bentham's Newsletter | 2 days ago
  • But Does Social Media Use Actually Cause Bad Mental Health?
    It’s interesting how studies on the negative effects of social media on mental health are mixed: some find an effect, some don’t (or only find a very small effect). Some take this as proof that social media is actually fine for mental health. My hypothesis is different. I think that the effects of social media […]...
    Optimize Everything | 2 days ago
  • Reflections on 4 years of meta-honesty
    Honesty is quite powerful in many cases: if you have a reputation for being honest, people will trust you more and your words will have more weight (or so the argument goes). Unfortunately, being extremely honest all the time is also pretty difficult. What happens when the Nazis come knocking and ask if you have Jews in the basement?
    LessWrong | 2 days ago
  • Will Welfareans Get to Experience the Future?
    Cross-posted from my website. Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they're true, so I'm not even going to try.
    Effective Altruism Forum | 3 days ago
  • Announcing our 2025 Charity Recommendations
    Discover the 2025 charity recommendations from Animal Charity Evaluators — top animal welfare charities, impact analysis, and giving insights today! …  Read more...
    Animal Charity Evaluators | 3 days ago
  • Networking is less dumb than you might think
    I used to think of networking somewhat like this:
    Thing of Things | 3 days ago



SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.