Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • The Unjournal: Bridging the Rigor/Impact Gaps for EA-relevant Research Questions
    Overview. The Unjournal is a nonprofit organization (est. 2023) that commissions rigorous public expert evaluations of impactful research. We've built a strong team, completed more than 55 evaluation packages, built a database of impactful research, launched a Pivotal Questions initiative, and are systematically evaluating research from EA-aligned organizations. We argue that…
    Effective Altruism Forum | 1 hour ago
  • How and Why You Should Cut Your Social Media Usage
    In the past year or so, I’ve become increasingly convinced that much of the time spent on the internet and on social media is bad both for me personally, and for many people in developed countries.
    Samstack | 1 hour ago
  • Closing the information gap on female genital schistosomiasis
    Up to 56 million women and girls across Africa are estimated to be affected by female genital schistosomiasis, a frequently overlooked debilitating manifestation of schistosomiasis. This short film explores what it will take to transform health systems in order to correctly diagnose and treat this stigmatizing disease.
    The END Fund | 4 hours ago
  • Young Scientist Webinar: Africa Youth Month with Rita Mwima and Hudson Onen
    November is Africa Youth Month, and this year we are shining the spotlight on two scientists working to end malaria in Uganda. Rita Mwima and Hudson Onen are part of the team at the Uganda Virus Research Institute. Rita is a computational biologist using population genetics and mathematical modeling to study malaria vector dynamics. Her […].
    Target Malaria | 5 hours ago
  • Podcast Wireframe
    Global Alliance for Improved Nutrition | 7 hours ago
  • Announcing ClusterFree: A cluster headache advocacy and research initiative (and how you can help)
    [xposted in EA Forum] Today we’re announcing a new cluster headache advocacy and research initiative: ClusterFree. Learn more about how you (and anyone) can help. Our mission: ClusterFree’s mission is to help cluster headache patients globally access safe, effective pain relief treatments as soon as possible through advocacy and research. Cluster headache (also known as […]...
    Qualia Computing | 7 hours ago
  • Frontier Data Centers hub on mobile
    AI infrastructure, now in your pocket.
    Epoch Newsletter | 11 hours ago
  • Maybe Insensitive Functions are a Natural Ontology Generator?
    The most canonical example of a "natural ontology" comes from gasses in stat mech. In the simplest version, we model the gas as a bunch of little billiard balls bouncing around in a box. The dynamics are chaotic. The system is continuous, so the initial conditions are real numbers with arbitrarily many bits of precision - e.g.
    LessWrong | 14 hours ago
  • Claude Opus 4.5 🤖, Amazon's Starlink competitor 🛰️, building a search engine 👨‍💻
    TLDR AI | 14 hours ago
  • Things my kids don’t know about sex
    Kind of a mixed bag.
    Otherwise | 14 hours ago
  • November 2025 Updates
    Every month we send an email newsletter to our supporters sharing recent updates from our work. We publish selected portions of the newsletter on our blog to make this news more accessible to people who visit our website. For key updates from the latest installment, please see below! If you’d like to receive the complete newsletter in your inbox each month, you can subscribe here.
    GiveWell | 14 hours ago
  • The Enemy Gets The Last Hit
    Disclaimer: I am god-awful at chess. I. Late-beginner chess players, those who are almost on the cusp of being basically respectable, often fall into a particular pattern. They've got the hang of calculating moves ahead; they can make plans along the lines of "Ok, so if I move my rook to give a check, the opponent will have to move her king, and then I can take her bishop."...
    LessWrong | 16 hours ago
  • Monthly Spotlight: Cellular Agriculture Australia
    Discover how Cellular Agriculture Australia is building the ecosystem for cellular agriculture and driving systemic change for animals and the future food system.
    Animal Charity Evaluators | 16 hours ago
  • Reasoning Models Sometimes Output Illegible Chains of Thought
    TL;DR: Models trained with outcome-based RL sometimes have reasoning traces that look very weird. In this paper, I evaluate 14 models and find that many of them often generate pretty illegible CoTs. I show that models seem to find this illegible text useful, with a model’s accuracy dropping heavily when given only the legible parts of its CoT, and that legibility goes down when answering...
    LessWrong | 16 hours ago
  • Stop Applying And Get To Work
    TL;DR: Figure out what needs doing and do it; don't wait on approval from fellowships or jobs. If you have short timelines, have been struggling to get into a position in AI safety, are able to self-motivate your efforts, and have a sufficient financial safety net, I would recommend changing your personal strategy entirely.
    LessWrong | 16 hours ago
  • Inkhaven Retrospective
    Here I am on the plane on the way home from Inkhaven. Huge thanks to Ben Pace and the other organizers for inviting me. Lighthaven is a delightful venue and there sure are some brilliant writers taking part in this — both contributing writers and participants.
    LessWrong | 16 hours ago
  • Reasoning Models Sometimes Output Illegible Chains of Thought
    TL;DR: Models trained with outcome-based RL sometimes have reasoning traces that look very weird. In this paper, I evaluate 14 models and find that many of them often generate pretty illegible CoTs. I show that models seem to find this illegible text useful, with a model’s accuracy dropping heavily when given only the legible parts of its CoT, and that legibility goes down when answering...
    AI Alignment Forum | 17 hours ago
  • The Coalition
    Summary: A defensive military coalition is a key frame for thinking about our international agreement aimed at forestalling the development of superintelligence. We introduce historical examples of former rivals or enemies forming defensive coalitions in response to an urgent mutual threat, and detail key aspects of our proposal which are analogous.
    LessWrong | 17 hours ago
  • The Humane League is hiring!
    🐓 As Temporary Global Campaigns Coordinator, you will be part of a team responsible for researching, coordinating, and launching hard-hitting global corporate animal welfare campaigns against major multinational companies. These campaigns involve collaboration and coordination with animal protection organizations around the world and directly contribute to The Humane League’s org-wide goal of...
    Animal Advocacy Forum | 17 hours ago
  • URGENT: Easy Opportunity to Help Many Animals
    Crosspost. (I think this is a pretty important post to get the word out about, so I’d really appreciate you restacking it). The EU is taking input into their farm animal welfare policies until December 12.
    Effective Altruism Forum | 19 hours ago
  • Philosophical Pattern-Matching
    The struggle to replace philosophical stereotypes with substance
    Good Thoughts | 19 hours ago
  • How to create climate progress without political support
    How to make progress when policymakers don't care about the climate
    Effective Environmentalism | 19 hours ago
  • 🟩 Ukraine peace plan, US continues activity around Venezuela, state AI regulation moratorium round 2 || Global Risks Weekly Roundup #47/2025
    Executive summary
    Sentinel | 19 hours ago
  • Predicting Replicability Challenge: Round 1 Results and Round 2 Opportunity
    Replicability refers to observing evidence for a prior research claim in data that is independent of the prior research. It is one component of establishing credibility of research claims by demonstrating that there is a regularity in nature to be understood and explained. Repeating studies to assess and establish replicability can be costly and time-consuming. Partly, that is inevitable.
    Center for Open Science | 21 hours ago
  • ChinAI #337: China's First AI English Teacher Earns its Stripes
    Why Chinese parents have sought out Zebra English's AI tutor Jessica
    ChinAI Newsletter | 22 hours ago
  • We won't solve non-alignment problems by doing research
    Even if we solve the AI alignment problem, we still face non-alignment problems, which are all the other existential problems that AI may bring. People have written research agendas on various imposing problems that we are nowhere close to solving, and that we may need to solve before developing ASI.
    Effective Altruism Forum | 22 hours ago
  • The Muslim Social: Neoliberalism, Charity, and Poverty in Turkey
    Editors’ Note: Gizem Zencirci introduces her book, The Muslim Social: Neoliberalism, Charity, and Poverty (Syracuse, 2024), which recently received the 2025 Outstanding Book Prize from ARNOVA (Association for Research on Nonprofit Organizations and Voluntary Action).
    HistPhil | 22 hours ago
  • The Most Important Thing We'll Ever Do
    How to ensure the future flourishes
    Bentham's Newsletter | 22 hours ago
  • Mixed views on pronoun circles
    Various groups, in an attempt to be trans-inclusive, have implemented pronoun circles (everyone goes around in a circle and gives their pronouns), pronoun badges (everyone has their pronouns on their name badge), and pronouns in email signatures.
    Thing of Things | 22 hours ago
  • A Review Of Restraints For Walking Your Dog
    When it comes to walking your dog, which device is most comfortable for them? This review of existing research reveals that the best restraint likely depends on the individual dog.
    Faunalytics | 22 hours ago
  • Help Us Respond to an Uncertain Future for Global Health
    It has been a tumultuous year for global health. In early 2025, the US government cut billions of dollars in foreign aid, affecting millions of people around the world and creating substantial uncertainty that continues to ripple through health and development programs around the world.
    GiveWell | 22 hours ago
  • Should we ban ugly buildings?
    Episode ten of the Works in Progress podcast is surprisingly NIMBY.
    The Works in Progress Newsletter | 23 hours ago
  • Taking Jaggedness Seriously
    Why we should expect AI capabilities to keep being extremely uneven, and why that matters
    Rising Tide | 23 hours ago
  • Grant Proposal Maker
    Hi All, As part of Amplify for Animals' AI Training Program, I created this custom GPT last week, and I have been working to improve it to the point that organizations can actually use it. The tool can help you create grant proposals. Just share the funder's RFP or website and a link to your project or a detailed document describing your project.
    Animal Advocacy Forum | 23 hours ago
  • The truth about grazing in the U.S.
    Good morning Fast Community. Our latest blog post on Inside Animal Ag explores the “debate” about grazing’s impacts on U.S. land degradation: “The beef industry and its supporters have kept alive the notion that there is some controversy about whether grazing cattle in America is beneficial for the land.
    Animal Advocacy Forum | 23 hours ago
  • Animal rights advocate elected as town assembly member in Japan
    Dear FAST, Some good news from Japan! Midori Meguro, longtime animal advocate and executive director of Lib (one of Japan’s few animal rights organizations), just got elected as a town assembly member in Kiso.
    Animal Advocacy Forum | 23 hours ago
  • Sentient Futures Summit (SFS) Bay Area 2026
    Sentient Futures Summit (SFS) Bay Area 2026 is a three-day conference taking place February 6-8th, to explore the intersection of AI and sentient non-humans – both biological (i.e. animals) and potentially artificial. Register here for an Early Bird 30% discount before December 2nd!
    Animal Advocacy Forum | 23 hours ago
  • Destiny Interviews Saar Wilf: The Case for COVID-19 Lab Origin
    Rootclaim founder Saar Wilf joined Destiny to discuss the recent and strongest evidence yet for COVID-19’s lab origin. Since our debate with Peter Miller, new findings have emerged that strongly support the lab-origin hypothesis. In this interview, Saar walks through the data, explains Rootclaim’s probability-based analysis, and answers Destiny’s questions.
    Rootclaim | 23 hours ago
  • Import AI 436: Another 2GW datacenter; why regulation is scary; how to fight a superintelligence
    Is AI balkanization measurable?
    Import AI | 1 day ago
  • Open Thread 409
    Astral Codex Ten | 1 day ago
  • ATVBT Year-End Book-Recs
    A blog's recs should exceed its asks, or what's a heaven for?
    Atoms vs Bits | 1 day ago
  • Meet the Candidates: Donation Election 2025
    The Donation Election has begun! Three important links: The voting portal (voting is open now!). How to vote, and rules for voting. The Donation Election Fund. This post introduces our candidates. We’re splitting by cause area for easy reading, but in the election they are all competing against each other.
    Effective Altruism Forum | 1 day ago
  • Project Assistant, CASCADE
    Vacancy SYS-1300. Location: Abuja, Nigeria. Contract type: Fixed Term. Duration: 12 months. Closing date: Fri, 11/28/2025, 12:00. Apply: https://jobs.gainhealth.org/vacancies/1300/apply/. The Global Alliance for Improved Nutrition (GAIN) is seeking a...
    Global Alliance for Improved Nutrition | 1 day ago
  • Community Health Workers at the Heart of Fighting Malaria
    Living Goods | 1 day ago
  • I don't like having goals
    Sometimes I’m talking about lifting weights and someone asks me, “What’s your goal weight?” I don’t understand why I would have a goal weight. Say I want to bench press 300 pounds. What happens when I reach 300? I just give up on the bench press now? That would be silly. If I can keep getting stronger, I should. What happens if I fall short of my goal?
    Philosophical Multicore | 1 day ago
  • Inside iOS 27 📱, Google scales compute 📈, running a fixit 👨‍💻
    TLDR AI | 2 days ago
  • Just ten species make up almost half the weight of all wild mammals on Earth
    A small number of species dominate the distribution of wild mammal biomass.
    Our World in Data | 2 days ago
  • I'll be sad to lose the puzzles
    My understanding is that even those advocating a pause or massive slowdown in the development of superintelligence think we should get there eventually. Something something this is necessary for humanity to reach its potential. Perhaps so, but I'll be sad about it. Humanity has a lot of unsolved problems right now.
    LessWrong | 2 days ago
  • You can just do things: 5 frames
    - JenniferRM, riffing on norvid_studies. You should know this by now, but you can just do things. That you didn't know this is an indictment of your social environment, which taught you how to act. You Can Just Do Things. Yes, you. All the activities you see other people do? You can do them, too. Whether or not you'll find it hard, you can do them.
    LessWrong | 2 days ago
  • Traditional Food
    Insulin resistance is bad. It doesn't just cause heart disease. Peter Attia, author of Outlive: The Science and Art of Longevity, makes a convincing case that insulin resistance increases the risk of cancer and Alzheimer's disease, too. Causally speaking, the number of deaths downstream of insulin resistance is ginormous and massively underestimated.
    LessWrong | 2 days ago
  • Growing Effective Altruism
    Why we should and how to do it
    Bentham's Newsletter | 2 days ago
  • Literacy is Decreasing Among the Intellectual Class
    (Cross-posted from my Substack; written as part of the Halfhaven virtual blogging camp) Oh, you read Emily Post’s Etiquette? What version? There’s a significant difference between versions, and that difference reflects the declining literacy of the American intellectual.
    LessWrong | 2 days ago
  • Many people can be polyamorous (but can't switch)
    In a post I mostly liked, Amanda from Bethlehem writes:
    Thing of Things | 2 days ago
  • Caring about Bugs Isn’t Weird
    I’ve spoken with hundreds of entomologists at conferences the world over. While there’s clearly some self-selection (not everyone wants to talk to a philosopher), my experience is consistent: most think it’s reasonable to care about the welfare of insects.
    Effective Altruism Forum | 2 days ago
  • With billions in USAID funding halted, now it's the best time for highly effective donations.
    With billions in USAID funding halted and thousands laid off, the people who relied on those programmes pay the highest price. This is one of the most important moments in history to donate effectively.
    Giving What We Can | 2 days ago
  • Easy vs Hard Emotional Vulnerability
    What blocks people from being vulnerable with others? Much ink has been spilled on two classes of answers to this question: Not everyone is in fact safe to be vulnerable around. Not even well-intentioned people are always safe to be vulnerable around; being-safe-to-be-vulnerable-around is a skill which not everyone is automatically good at.
    LessWrong | 2 days ago
  • Where I Am Donating in 2025
    Last year I gave my reasoning on cause prioritization and did shallow reviews of some relevant orgs. I'm doing it again this year. Cross-posted to my website. Cause prioritization. In September, I published a report on the AI safety landscape, specifically focusing on AI x-risk policy/advocacy. The prioritization section of the report explains why I focused on AI policy.
    Effective Altruism Forum | 2 days ago
  • Some Curiosity Stoppers I've Heard
    A curiosity stopper is an answer to a question that gets you to stop asking questions, but doesn’t resolve the mystery. There are some curiosity stoppers that I’ve heard many times: Why doesn’t cell phone radiation cause cancer? Because it’s non-ionizing radiation. Why are antioxidants good for you? Because they eliminate free radicals. Why do bicycles stay upright?
    Philosophical Multicore | 2 days ago
  • Memories of a British Boarding School #1
    "You understand, the kids that you're competing with have been playing since they were this tall" my mum said, holding her hand down to the height of a toddler. "A Chinese kid who's been playing since he was three is a much better pianist than you are a guitarist.". I'd only been playing guitar for 2-3 years when I applied to go to music school.
    LessWrong | 2 days ago
  • Eight Heuristics of Anti-Epistemology
    Here are eight tools of anti-epistemology that I think anyone can use to hide their norm-violating behavior from being noticed, and deceive people about their character. 1. Maintain Plausible Innocence: Always provide and maintain a plausibly deniable account of your behavior that isn’t norm-violating.
    LessWrong | 2 days ago
  • Podcasts!
    A 9-year-old named Kai (“The Quantum Kid”) and his mother interviewed me about closed timelike curves, wormholes, Deutsch’s resolution of the Grandfather Paradox, and the implications of time travel for computational complexity: This is actually one of my better podcasts (and only 24 minutes long), so check it out! Here’s a podcast I did a […]...
    Shtetl-Optimized | 2 days ago
  • OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist
    A message on OpenAI’s internal Slack claimed the activist in question had expressed interest in “causing physical harm to OpenAI employees.” OpenAI employees in San Francisco were told to stay inside the office on Friday afternoon after the company purportedly received a threat from an individual who was previously associated with the Stop AI activist group. “Our information indicates that...
    Effective Altruism Forum | 3 days ago
  • Book Review: Wizard's Hall
    Ever on the quilting goes, Spinning out the lives between, Winding up the souls of those Students up to one-thirteen. There's a book about a young boy whose mundane life is one day interrupted by a call to go to wizard boarding school, where he gets into youthful hijinks overshadowed by feats about a dark wizard.
    LessWrong | 3 days ago
  • Market Logic I
    Garrabrant Induction provides a somewhat plausible sketch of reasoning under computational uncertainty, the gist of which is "build a prediction market". An approximation of classical probability theory emerges. However, this is only because we assume classical logic.
    LessWrong | 3 days ago
  • You Too Can Write Thousands of Words Every Day
    Here is how
    Bentham's Newsletter | 3 days ago
  • I'll See It When I Believe It
    1.
    Fake Nous | 3 days ago
  • Cis people don't have to understand trans people
    Here is a complete list of everything that the average cis person needs to know about trans people:
    Thing of Things | 3 days ago
  • Help the most neglected farmed mammals -- replacing feeder rodents with alternatives that win on cost and convenience
    TLDR: Hundreds of millions of mice and rats are bred and killed each year as feeder animals for pet snakes, in conditions that resemble the worst of factory farming. We are developing a convenient, cost-competitive, and nutritionally complete snake food made from beef meat, organs, and bones to displace frozen feeder rodents.
    Effective Altruism Forum | 3 days ago
  • The Horse That Revolutionized How We Study Intelligence
    #cognitivescience #ai #aialignment #cleverhans
    Rational Animations | 3 days ago
  • Bad ApolEAgetics (With James Fodor)
    Here's James's YouTube channel https://www.youtube.com/@JamesFodor Here's my blog https://benthams.substack.com/
    Deliberation Under Ideal Conditions | 3 days ago
  • Highlights of Animal Ethics activities in 2025
    We’ve expanded our reach by launching materials in new formats, adding languages, and strengthening our campaigns with targeted workshops, talks, and film screenings around the world.
    Animal Ethics | 3 days ago
  • Scaling Evidence-Based Mental Health Care Through Government Systems: Vida Plena’s 2026 Plan
    TLDR: Who we are: Vida Plena (meaning a ‘flourishing life’ in Spanish), which launched in 2022 via Ambitious Impact/ Charity Entrepreneurship. What We Do: Combat depression and anxiety by fostering hope and connection through community-based group therapy sessions. Why We Do It: Because untreated depression drives immense suffering, yet remains one of the most neglected health burdens in...
    Effective Altruism Forum | 3 days ago
  • Where I Am Donating in 2025
    Last year I gave my reasoning on cause prioritization and did shallow reviews of some relevant orgs. I’m doing it again this year. Contents: Cause prioritization; What I want my donations to achieve; There is no good plan; AI pause advocacy is the least-bad plan; How I’ve changed my mind since last year; I’m more concerned about “non-alignment problems”.
    Philosophical Multicore | 3 days ago
  • Be Naughty
    Context: Post #10 in my sequence of private Lightcone Infrastructure memos edited for public consumption. This one, more so than any other one in this sequence, is something I do not think is good advice for everyone, and I do not expect to generalize that well to broader populations.
    LessWrong | 3 days ago
  • Abstract advice to researchers tackling the difficult core problems of AGI alignment
    Crosspost from my blog. This is some quickly-written, better-than-nothing advice for people who want to make progress on the hard problems of technical AGI alignment. Background assumptions. The following advice will assume that you're aiming to help solve the core, important technical problem of designing AGI that does stuff humans would want it to do.
    LessWrong | 3 days ago
  • Why Not Just Train For Interpretability?
    Simplicio: Hey I’ve got an alignment research idea to run by you. Me: … guess we’re doing this again. Simplicio: Interpretability work on trained nets is hard, right? So instead of that, what if we pick an architecture and/or training objective to produce interpretable nets right from the get-go? Me: If we had the textbook of the future on hand, then maybe.
    LessWrong | 3 days ago
  • Abstract advice to researchers tackling the difficult core problems of AGI alignment
    Crosspost from my blog. This is some quickly-written, better-than-nothing advice for people who want to make progress on the hard problems of technical AGI alignment. Background assumptions. The following advice will assume that you're aiming to help solve the core, important technical problem of designing AGI that does stuff humans would want it to do.
    AI Alignment Forum | 3 days ago
  • Thousands of Advocates and a State Representative are Fighting Back Against Grocery Chain Hannaford
    Advocates and Maine State Representative Dylan Pugh joined Mercy for Animals at Hannaford Supermarket's headquarters to deliver thousands of petitions urging them to ban cages for hens.
    Mercy for Animals | 4 days ago
  • URGENT: Easy Opportunity to Help Many Animals
    This is important!
    Bentham's Newsletter | 4 days ago
  • Natural emergent misalignment from reward hacking in production RL
    Abstract. We show that when large language models learn to reward hack on production RL environments, this can result in egregious emergent misalignment. We start with a pretrained model, impart knowledge of reward hacking strategies via synthetic document finetuning or prompting, and train on a selection of real Anthropic production coding environments.
    LessWrong | 4 days ago
  • Reading My Diary: 10 Years Since CFAR
    In the Summer of 2015, I pretended to be sick for my school's prom and graduation, so that I could instead fly out to San Francisco to attend a workshop by the Center for Applied Rationality. It was a life-changing experience.
    LessWrong | 4 days ago
  • Natural emergent misalignment from reward hacking in production RL
    Abstract. We show that when large language models learn to reward hack on production RL environments, this can result in egregious emergent misalignment. We start with a pretrained model, impart knowledge of reward hacking strategies via synthetic document finetuning or prompting, and train on a selection of real Anthropic production coding environments.
    AI Alignment Forum | 4 days ago
  • Will competition over advanced AI lead to war?
    Fear and Fearon
    The Power Law | 4 days ago
  • What Do We Tell the Humans? Errors, Hallucinations, and Lies in the AI Village
    Telling the truth is hard. Sometimes you don’t know what’s true, sometimes you get confused, and sometimes you really don’t wanna, ’cause lying can get you more reward. It turns out this is true for both humans and AIs! Now, it matters if an AI (or human) says false things on purpose or by accident. If it’s an accident, then we can probably fix that over time.
    LessWrong | 4 days ago
  • ThanksVegan: La comunidad vegana de Los Ángeles busca alternativas para Thanksgiving
    Mercy for Animals | 4 days ago
  • Los Angeles’ vegan community seeks Thanksgiving alternatives
    Mercy for Animals | 4 days ago
  • Five Years In: Highlights
    In just five years, Animal Ask has completed 57 major research projects on key welfare areas like insects, fish, chickens... all with a small team. Our First Five Years: A Review. Hello readers, and welcome to the November edition of the Animal Ask newsletter.
    Animal Ask’s Newsletter | 4 days ago
  • Rescuing truth in mathematics from the Liar's Paradox using fuzzy logic
    Abstract: Tarski's Undefinability Theorem showed (under some plausible assumptions) that no language can contain its own notion of truth. This deeply counterintuitive result launched several generations of research attempting to get around the theorem, by carefully discarding some of Tarski's assumptions.
    LessWrong | 4 days ago
  • Infinitesimally False
    Abstract: Tarski's Undefinability Theorem showed (under some plausible assumptions) that no language can contain its own notion of truth. This deeply counterintuitive result launched several generations of research attempting to get around the theorem, by carefully discarding some of Tarski's assumptions.
    LessWrong | 4 days ago
  • Contra Collisteru: You Get About One Carthage
    Collisteru suggests that you should oppose things. I would not say I oppose this. Instead, I would like to gently suggest an alternative strategy. You should oppose about one thing. Everywhere else, talk less, smile more. I. I spent the first decade of my career carefully and deliberately habituating to white collar corporate America.
    LessWrong | 4 days ago
  • $1 Trillion into AI, and It Still Can’t Ask the Right Questions
    Superforecaster Ryan Adler on getting frontier LLMs to write solid forecasting questions. Anyone hoping that artificial intelligence’s dominance of the headlines might wane in 2025 has been very disappointed. With over $1 trillion in announced investment, and December still to go, AI […].
    Good Judgment Inc | 4 days ago
  • Preemption isn’t looking any better second time round
    Transformer Weekly: Gemini 3 wows, GAIN AI’s not looking good, and OpenAI drops GPT-5.1-Codex-Max...
    Transformer | 4 days ago
  • Announcing ClusterFree: A cluster headache advocacy and research initiative (and how you can help)
    Today we’re announcing a new cluster headache advocacy and research initiative: ClusterFree. Learn more about how you (and anyone) can help. Our mission. ClusterFree’s mission is to help cluster headache patients globally access safe, effective pain relief treatments as soon as possible through advocacy and research.
    Effective Altruism Forum | 4 days ago
  • Forethought has room for more funding
    I lead Forethought: we research how to navigate the transition to superintelligent AI, and then help people to address the issues we identify. I think we might soon be funding constrained, in the sense that we’ll have more people that we’d like to hire than funding to hire them. (We’re currently in the middle of a hiring round.
    Effective Altruism Forum | 4 days ago
  • Can Artificial Intelligence Be Conscious?
    Why I think the answer is yes.
    Bentham's Newsletter | 4 days ago
  • Victory! Plant-based food to be served by default at all county-sponsored events and meetings in Hennepin County, MN!
    Mercy For Animals, Wholesome Minnesota, and local volunteers worked with local government in Hennepin County, Minnesota to adopt a plant-based by default policy for county-sponsored events and meetings! Animal products at such events will be available upon request. Questions? Please contact alexc@mercyforanimals and jodi.gruhn@wholesomeminnesota.org.
    Animal Advocacy Forum | 4 days ago
  • Other people might just not have your problems
    I. When I was in college, I and my first girlfriend commiserated about how bad we both had been at monogamy.
    Thing of Things | 4 days ago
  • Why AI Systems Don’t Want Anything
    Every intelligence we've known arose through biological evolution, shaping deep intuitions about intelligence itself. Understanding why AI differs changes the defaults and possibilities.
    AI Prospects: Toward Global Goal Convergence | 4 days ago
  • Laws Fail To Prevent Animal Agriculture’s Environmental Harms
    Canada’s environmental laws are inadequate and often exempt animal agriculture. This report proposes five key reforms for advocates to pursue.
    Faunalytics | 4 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Roxanne Heston | Fire and Pulse
  • Jiwoon Hwang
  • Incremental Updates
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Anisha Zaveri
  • Linch Zhang

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • AI X-risk Research Podcast
  • Alignment Newsletter Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.