Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Why the West was downzoned
    In the space of a few decades, nearly every city in the Western world banned densification. What happened?
    The Works in Progress Newsletter | 1 hour ago
  • 2025 Results, 2026 Plans and Funding Needs
    We have just released our end-of-year 2025 Results, 2026 Plans, and Funding Needs document that demonstrates what a donation to RP can accomplish: from contributing to better allocation of millions of…
    Rethink Priorities | 2 hours ago
  • Is This Anything? 22
    Recently I've been getting into Gratitude Patrols. People talk about gratitude journals as being one of the few provably impactful psychological interventions [citation needed], but I think there's a benefit to physically embodying it by walking around your space each morning and thanking things. If you…
    Atoms vs Bits | 2 hours ago
  • Nutrition Investing: Moving from Awareness to Action
    Investing in nutrition isn’t just possible, it’s smart. That’s the key message that sticks with us a few weeks after the GIIN Impact Forum 2025, where we organised a session, “Nutrition Lens Investing: A Framework for Action”.
    Global Alliance for Improved Nutrition | 2 hours ago
  • 3 doubts about veganism
    I keep thinking about what kind of identity would be useful for building a powerful animal advocacy movement. Here are 3 features of veganism that I often think about which make me doubt its usefulness. Too maximalist. The official definition of veganism by the inventors of the term is the following:
    Effective Altruism Forum | 3 hours ago
  • Understanding Moral Disagreement Through Data
    Short of time? Click here to read the key takeaways! We ran a study to better understand the moral judgments of people in the US, and we found striking disagreement. When participants rated the immorality of scenarios, a substantial share of them routinely gave maximally opposite judgments (0.0 vs. 5.0).
    Clearer Thinking | 4 hours ago
  • New “Top inviters” ranking | What to give this Christmas?
    Ayuda Efectiva | 5 hours ago
  • EA Forum Digest #268
    Hello! Three things to note: The Donation Election is now open! You can go vote on the Forum (if you had an account before October 24th). Read about all the candidates here (if you spot an error in this post, I have to donate $10), and read the rules here.
    EA Forum Digest | 5 hours ago
  • Vague promises don't protect animals – transparency does
    Sentience Politics | 5 hours ago
  • Takeaways from the Eleos Conference on AI Consciousness and Welfare
    Crossposted from my Substack. I spent the weekend at Lighthaven, attending the Eleos conference. In this post, I share thoughts and updates as I reflect on talks, papers, and discussions, and put out some of my takes since I haven't written about this topic before.
    LessWrong | 6 hours ago
  • The Eugenics Debate: A postmortem
    On Suffering, Sterilization, and Why We Somehow Spent So Much Time Talking About Incest
    Dissentient | 9 hours ago
  • Epoch AI webinar: Inside the Frontier Data Centers hub
    Showcasing the new Epoch AI Frontier Data Centers hub.
    Epoch Newsletter | 10 hours ago
  • 7-Eleven reports cage-free progress for first time (US/CAN)
    7-Eleven, the biggest convenience store in the US (and the world), has reported its cage-free progress for the first time ever: We’re committed to working with suppliers toward a goal of sourcing 100 percent cage-free eggs for all U.S. and Canada stores by 2025, based on available supply.
    Animal Advocacy Forum | 13 hours ago
  • Why AI Safety Won't Make America Lose The Race With China
    Astral Codex Ten | 14 hours ago
  • OpenAI finetuning metrics: What is going on with the loss curves?
    Introduction. For our current project, we've been using the OpenAI fine-tuning API. To run some of our experiments, we needed to understand exactly how the reported metrics (loss and accuracy) are calculated. Unfortunately, the official documentation is sparse, and the most detailed explanation we could find was the following table from Microsoft's Azure documentation:
    LessWrong | 14 hours ago
  • New “Top inviters” ranking
    Ayuda Efectiva | 16 hours ago
  • Elon's reusable tunnelers 🚇, Alibaba's AI app 📱, Gemini 3 API features 👨‍💻
    TLDR AI | 16 hours ago
  • How to Talk to Journalists
    TLDR: Just pick up the phone. Agree on permissions for information before you share it (see Step 4). I have worked as a professional journalist covering AI for over a year, and during that time, multiple people working in AI safety have asked me for advice on engaging with journalists. At this point, I've converged on some core lessons, so I figured I should share them more widely.
    Effective Altruism Forum | 18 hours ago
  • Alignment will happen by default. What’s next?
    I’m not 100% convinced of this, but I’m fairly convinced, more and more so over time. I’m hoping to start a vigorous but civilized debate. I invite you to attack my weak points and/or present counter-evidence. My thesis is that intent-alignment is basically happening, based on evidence from the alignment research in the LLM era. Introduction.
    LessWrong | 19 hours ago
  • Who should direct social spending?
    Individuals, Corporations, or Governments?
    Good Thoughts | 20 hours ago
  • Alignment will happen by default. What’s next?
    I’m not 100% convinced of this, but I’m fairly convinced, more and more so over time. I’m hoping to start a vigorous but civilized debate. I invite you to attack my weak points and/or present counter-evidence. My thesis is that intent-alignment is basically happening, based on evidence from the alignment research in the LLM era. Introduction.
    AI Alignment Forum | 21 hours ago
  • Ilya Sutskever – We're moving from the age of scaling to the age of research
    “These models somehow just generalize dramatically worse than people. It's a very fundamental thing.”...
    The Lunar Society | 23 hours ago
  • Rob & Luisa chat kids, the fertility crash, and how the ‘50s invented parenting that makes us miserable
    80,000 Hours | 23 hours ago
  • Eating the Ocean
    Vox’s Future Perfect takes pride in being one of the top destinations in media for consistent, rigorous, ethically clear-eyed coverage of one of the biggest stories of our time — the mass production of billions of animals for food on factory farms. But there’s arguably an even bigger, even more neglected story hidden within that […]
    Future Perfect | 23 hours ago
  • Christianity Is Very Morally Revisionary
    On giant torture pits
    Bentham's Newsletter | 23 hours ago
  • How Retailers Can Boost Plant-Based Food Sales
    Retail strategies to boost plant-based food sales work differently depending on location.
    Faunalytics | 24 hours ago
  • Issue 21: The Great Downzoning
    Plus: Why cities in poor countries need wider streets, how to measure competition, and the South Korean baby bust.
    The Works in Progress Newsletter | 24 hours ago
  • November 2025 North America Newsletter
    J-PAL North America's November newsletter features a new blog post on interpreting results from the Baby's First Years study, a feature on our 2025 Research Staff Training, and our recent Evidence Matters Convening in Seattle, WA.
    J-PAL | 24 hours ago
  • Hiring your favorite Substacker: a guide
    Sometimes people want to hire me to write or edit things for them.
    Thing of Things | 1 days ago
  • The Unjournal: Bridging the Rigor/Impact Gaps for EA-relevant Research Questions
    Overview. The Unjournal is a nonprofit organization (est. 2023) that commissions rigorous public expert evaluations of impactful research. We've built a strong team, completed more than 55 evaluation packages, built a database of impactful research, launched a Pivotal Questions initiative, and are systematically evaluating research from EA-aligned organizations. We argue that…
    Effective Altruism Forum | 1 days ago
  • How and Why You Should Cut Your Social Media Usage
    In the past year or so, I’ve become increasingly convinced that much of the time spent on the internet and on social media is bad both for me personally, and for many people in developed countries.
    Samstack | 1 days ago
  • Community Health at the Centre of Africa’s Health Sovereignty: Insights from the Africa Health Summit
    Living Goods | 1 days ago
  • Closing the information gap on female genital schistosomiasis
    Up to 56 million women and girls across Africa are estimated to be affected by female genital schistosomiasis, a frequently overlooked debilitating manifestation of schistosomiasis. This short film explores what it will take to transform health systems in order to correctly diagnose and treat this stigmatizing disease.
    The END Fund | 1 days ago
  • Young Scientist Webinar: Africa Youth Month with Rita Mwima and Hudson Onen
    November is Africa Youth Month, and this year we are shining the spotlight on two scientists working to end malaria in Uganda. Rita Mwima and Hudson Onen are part of the team at the Uganda Virus Research Institute. Rita is a computational biologist using population genetics and mathematical modeling to study malaria vector dynamics. Her […].
    Target Malaria | 1 days ago
  • How to Fix Quidditch
    Inspired by this post by Tomás Bjartur, which is an allegory; but I’m not writing an allegory, I’m writing about the rules of Quidditch. The rules of Quidditch have a big problem. The game ends when a seeker catches the snitch, and the snitch is worth 150 points. So most of the players on the field don’t matter; in almost all games, the only thing that matters is who catches the snitch.
    Philosophical Multicore | 1 days ago
  • Podcast Wireframe
    Global Alliance for Improved Nutrition | 1 days ago
  • Announcing ClusterFree: A cluster headache advocacy and research initiative (and how you can help)
    [xposted in EA Forum] Today we’re announcing a new cluster headache advocacy and research initiative: ClusterFree. Learn more about how you (and anyone) can help. Our mission: ClusterFree’s mission is to help cluster headache patients globally access safe, effective pain relief treatments as soon as possible through advocacy and research. Cluster headache (also known as […]
    Qualia Computing | 1 days ago
  • Frontier Data Centers hub on mobile
    AI infrastructure, now in your pocket.
    Epoch Newsletter | 2 days ago
  • Maybe Insensitive Functions are a Natural Ontology Generator?
    The most canonical example of a "natural ontology" comes from gasses in stat mech. In the simplest version, we model the gas as a bunch of little billiard balls bouncing around in a box. The dynamics are chaotic. The system is continuous, so the initial conditions are real numbers with arbitrarily many bits of precision - e.g.
    LessWrong | 2 days ago
  • Claude Opus 4.5 🤖, Amazon's Starlink competitor 🛰️, building a search engine 👨‍💻
    TLDR AI | 2 days ago
  • Plinko PIR tutorial
    Vitalik Buterin | 2 days ago
  • Things my kids don’t know about sex
    Kind of a mixed bag.
    Otherwise | 2 days ago
  • November 2025 Updates
    Every month we send an email newsletter to our supporters sharing recent updates from our work. We publish selected portions of the newsletter on our blog to make this news more accessible to people who visit our website. For key updates from the latest installment, please see below! If you’d like to receive the complete newsletter in your inbox each month, you can subscribe here.
    GiveWell | 2 days ago
  • The Enemy Gets The Last Hit
    Disclaimer: I am god-awful at chess. I. Late-beginner chess players, those who are almost on the cusp of being basically respectable, often fall into a particular pattern. They've got the hang of calculating moves ahead; they can make plans along the lines of "Ok, so if I move my rook to give a check, the opponent will have to move her king, and then I can take her bishop."...
    LessWrong | 2 days ago
  • Monthly Spotlight: Cellular Agriculture Australia
    Discover how Cellular Agriculture Australia is building the ecosystem for cellular agriculture and driving systemic change for animals and the future food system.
    Animal Charity Evaluators | 2 days ago
  • Reasoning Models Sometimes Output Illegible Chains of Thought
    TL;DR: Models trained with outcome-based RL sometimes have reasoning traces that look very weird. In this paper, I evaluate 14 models and find that many of them often generate pretty illegible CoTs. I show that models seem to find this illegible text useful, with a model’s accuracy dropping heavily when given only the legible parts of its CoT, and that legibility goes down when answering...
    LessWrong | 2 days ago
  • Stop Applying And Get To Work
    TL;DR: Figure out what needs doing and do it, don't wait on approval from fellowships or jobs. If you have short timelines, have been struggling to get into a position in AI safety, are able to self-motivate your efforts, and have a sufficient financial safety net, I would recommend changing your personal strategy entirely.
    LessWrong | 2 days ago
  • Inkhaven Retrospective
    Here I am on the plane on the way home from Inkhaven. Huge thanks to Ben Pace and the other organizers for inviting me. Lighthaven is a delightful venue and there sure are some brilliant writers taking part in this — both contributing writers and participants.
    LessWrong | 2 days ago
  • Reasoning Models Sometimes Output Illegible Chains of Thought
    TL;DR: Models trained with outcome-based RL sometimes have reasoning traces that look very weird. In this paper, I evaluate 14 models and find that many of them often generate pretty illegible CoTs. I show that models seem to find this illegible text useful, with a model’s accuracy dropping heavily when given only the legible parts of its CoT, and that legibility goes down when answering...
    AI Alignment Forum | 2 days ago
  • The Coalition
    Summary: A defensive military coalition is a key frame for thinking about our international agreement aimed at forestalling the development of superintelligence. We introduce historical examples of former rivals or enemies forming defensive coalitions in response to an urgent mutual threat, and detail key aspects of our proposal which are analogous.
    LessWrong | 2 days ago
  • The Humane League is hiring!
    🐓 As Temporary Global Campaigns Coordinator, you will be part of a team responsible for researching, coordinating, and launching hard-hitting global corporate animal welfare campaigns against major multinational companies. These campaigns involve collaboration and coordination with animal protection organizations around the world and directly contribute to The Humane League’s org-wide goal of...
    Animal Advocacy Forum | 2 days ago
  • URGENT: Easy Opportunity to Help Many Animals
    Crosspost. (I think this is a pretty important post to get the word out about, so I’d really appreciate you restacking it). The EU is taking input into their farm animal welfare policies until December 12.
    Effective Altruism Forum | 2 days ago
  • Philosophical Pattern-Matching
    The struggle to replace philosophical stereotypes with substance
    Good Thoughts | 2 days ago
  • How to create climate progress without political support
    How to make progress when policymakers don't care about the climate
    Effective Environmentalism | 2 days ago
  • 🟩 Ukraine peace plan, US continues activity around Venezuela, state AI regulation moratorium round 2 || Global Risks Weekly Roundup #47/2025
    Executive summary
    Sentinel | 2 days ago
  • Predicting Replicability Challenge: Round 1 Results and Round 2 Opportunity
    Replicability refers to observing evidence for a prior research claim in data that is independent of the prior research. It is one component of establishing credibility of research claims by demonstrating that there is a regularity in nature to be understood and explained. Repeating studies to assess and establish replicability can be costly and time-consuming. Partly, that is inevitable.
    Center for Open Science | 2 days ago
  • ChinAI #337: China's First AI English Teacher Earns its Stripes
    Why Chinese parents have sought out Zebra English's AI tutor Jessica
    ChinAI Newsletter | 2 days ago
  • We won't solve non-alignment problems by doing research
    Even if we solve the AI alignment problem, we still face non-alignment problems, which are all the other existential problems that AI may bring. People have written research agendas on various imposing problems that we are nowhere close to solving, and that we may need to solve before developing ASI.
    Effective Altruism Forum | 2 days ago
  • The Muslim Social: Neoliberalism, Charity, and Poverty in Turkey
    Editors’ Note: Gizem Zencirci introduces her book, The Muslim Social: Neoliberalism, Charity, and Poverty (Syracuse, 2024), which recently received the 2025 Outstanding Book Prize from ARNOVA (Association for Research on Nonprofit Organizations and Voluntary Action).
    HistPhil | 2 days ago
  • The Most Important Thing We'll Ever Do
    How to ensure the future flourishes
    Bentham's Newsletter | 2 days ago
  • Mixed views on pronoun circles
    Various groups, in an attempt to be trans-inclusive, have implemented pronoun circles (everyone goes around in a circle and gives their pronouns), pronoun badges (everyone has their pronouns on their name badge), and pronouns in email signatures.
    Thing of Things | 2 days ago
  • A Review Of Restraints For Walking Your Dog
    When it comes to walking your dog, which device is most comfortable for them? This review of existing research reveals that the best restraint likely depends on the individual dog.
    Faunalytics | 2 days ago
  • Help Us Respond to an Uncertain Future for Global Health
    It has been a tumultuous year for global health. In early 2025, the US government cut billions of dollars in foreign aid, affecting millions of people around the world and creating substantial uncertainty that continues to ripple through health and development programs around the world.
    GiveWell | 2 days ago
  • Should we ban ugly buildings?
    Episode ten of the Works in Progress podcast is surprisingly NIMBY.
    The Works in Progress Newsletter | 2 days ago
  • Taking Jaggedness Seriously
    Why we should expect AI capabilities to keep being extremely uneven, and why that matters
    Rising Tide | 2 days ago
  • Grant Proposal Maker
    Hi all. As part of Amplify for Animals' AI Training Program, I created this custom GPT last week, and I have been working to improve it to the point that organizations can actually use it. The tool can help you create grant proposals. Just share the funder's RFP or website and a link to your project or a detailed document about your project.
    Animal Advocacy Forum | 2 days ago
  • The truth about grazing in the U.S.
    Good morning, FAST community. Our latest blog post on Inside Animal Ag explores the “debate” about grazing’s impacts on U.S. land degradation: “The beef industry and its supporters have kept alive the notion that there is some controversy about whether grazing cattle in America is beneficial for the land.
    Animal Advocacy Forum | 2 days ago
  • Animal rights advocate elected as town assembly member in Japan
    Dear FAST, Some good news from Japan! Midori Meguro, longtime animal advocate and executive director of Lib (one of Japan’s few animal rights organizations), just got elected as a town assembly member in Kiso.
    Animal Advocacy Forum | 2 days ago
  • Sentient Futures Summit (SFS) Bay Area 2026
    Sentient Futures Summit (SFS) Bay Area 2026 is a three-day conference taking place February 6-8th, to explore the intersection of AI and sentient non-humans – both biological (i.e. animals) and potentially artificial. Register here for an Early Bird 30% discount before December 2nd!
    Animal Advocacy Forum | 2 days ago
  • Destiny Interviews Saar Wilf: The Case for COVID-19 Lab Origin
    Rootclaim founder Saar Wilf joined Destiny to discuss the recent and strongest evidence yet for COVID-19’s lab origin. Since our debate with Peter Miller, new findings have emerged that strongly support the lab-origin hypothesis. In this interview, Saar walks through the data, explains Rootclaim’s probability-based analysis, and answers Destiny’s questions.
    Rootclaim | 2 days ago
  • Import AI 436: Another 2GW datacenter; why regulation is scary; how to fight a superintelligence
    Is AI balkanization measurable?
    Import AI | 2 days ago
  • Open Thread 409
    Astral Codex Ten | 2 days ago
  • ATVBT Year-End Book-Recs
    A blog's recs should exceed its asks, or what's a heaven for?
    Atoms vs Bits | 2 days ago
  • Meet the Candidates: Donation Election 2025
    The Donation Election has begun! Three important links: The voting portal (voting is open now!). How to vote, and rules for voting. The Donation Election Fund. This post introduces our candidates. We’re splitting by cause area for easy reading, but in the election they are all competing against each other.
    Effective Altruism Forum | 2 days ago
  • Project Assistant, CASCADE
    Project Assistant, CASCADE (vacancy SYS-1300). Location: Abuja, Nigeria. Contract type: Fixed Term, 12 months. Closing date: Fri, 11/28/2025. Apply: https://jobs.gainhealth.org/vacancies/1300/apply/. The Global Alliance for Improved Nutrition (GAIN) is seeking a...
    Global Alliance for Improved Nutrition | 2 days ago
  • Community Health Workers at the Heart of Fighting Malaria
    The post Community Health Workers at the Heart of Fighting Malaria appeared first on Living Goods.
    Living Goods | 2 days ago
  • I don't like having goals
    Sometimes I’m talking about lifting weights and someone asks me, “What’s your goal weight?” I don’t understand why I would have a goal weight. Say I want to bench press 300 pounds. What happens when I reach 300? I just give up on the bench press now? That would be silly. If I can keep getting stronger, I should. What happens if I fall short of my goal?
    Philosophical Multicore | 2 days ago
  • Just ten species make up almost half the weight of all wild mammals on Earth
    A small number of species dominate the distribution of wild mammal biomass.
    Our World in Data | 3 days ago
  • Inside iOS 27 📱, Google scales compute 📈, running a fixit 👨‍💻
    TLDR AI | 3 days ago
  • I'll be sad to lose the puzzles
    My understanding is that even those advocating a pause or massive slowdown in the development of superintelligence think we should get there eventually. Something something this is necessary for humanity to reach its potential. Perhaps so, but I'll be sad about it. Humanity has a lot of unsolved problems right now.
    LessWrong | 3 days ago
  • You can just do things: 5 frames
    - JenniferRM, riffing on norvid_studies. You should know this by now, but you can just do things. That you didn't know this is an indictment of your social environment, which taught you how to act. You Can Just Do Things. Yes, you. All the activities you see other people do? You can do them, too. Whether or not you'll find it hard, you can do them.
    LessWrong | 3 days ago
  • Traditional Food
    Insulin resistance is bad. It doesn't just cause heart disease. Peter Attia, author of Outlive: The Science and Art of Longevity, makes a convincing case that insulin resistance increases the risk of cancer and Alzheimer's disease, too. Causally-speaking, the number of deaths downstream of insulin resistance is ginormous and massively underestimated.
    LessWrong | 3 days ago
  • Growing Effective Altruism
    Why we should and how to do it
    Bentham's Newsletter | 3 days ago
  • Literacy is Decreasing Among the Intellectual Class
    (Cross-posted from my Substack; written as part of the Halfhaven virtual blogging camp) Oh, you read Emily Post’s Etiquette? What version? There’s a significant difference between versions, and that difference reflects the declining literacy of the American intellectual.
    LessWrong | 3 days ago
  • Many people can be polyamorous (but can't switch)
    In a post I mostly liked, Amanda from Bethlehem writes:
    Thing of Things | 3 days ago
  • Caring about Bugs Isn’t Weird
    I’ve spoken with hundreds of entomologists at conferences the world over. While there’s clearly some self-selection (not everyone wants to talk to a philosopher), my experience is consistent: most think it’s reasonable to care about the welfare of insects.
    Effective Altruism Forum | 3 days ago
  • With billions in USAID funding halted, now is the best time for highly effective donations.
    With billions in USAID funding halted and thousands laid off, the people who relied on those programmes pay the highest price. This is one of the most important moments in history to donate effectively.
    Giving What We Can | 3 days ago
  • Easy vs Hard Emotional Vulnerability
    What blocks people from being vulnerable with others? Much ink has been spilled on two classes of answers to this question: Not everyone is in fact safe to be vulnerable around. Not even well-intentioned people are always safe to be vulnerable around; being-safe-to-be-vulnerable-around is a skill which not everyone is automatically good at.
    LessWrong | 3 days ago
  • Where I Am Donating in 2025
    Last year I gave my reasoning on cause prioritization and did shallow reviews of some relevant orgs. I'm doing it again this year. Cross-posted to my website. Cause prioritization. In September, I published a report on the AI safety landscape, specifically focusing on AI x-risk policy/advocacy. The prioritization section of the report explains why I focused on AI policy.
    Effective Altruism Forum | 3 days ago
  • Some Curiosity Stoppers I've Heard
    A curiosity stopper is an answer to a question that gets you to stop asking questions, but doesn’t resolve the mystery. There are some curiosity stoppers that I’ve heard many times: Why doesn’t cell phone radiation cause cancer? Because it’s non-ionizing radiation. Why are antioxidants good for you? Because they eliminate free radicals. Why do bicycles stay upright?
    Philosophical Multicore | 3 days ago
  • Memories of a British Boarding School #1
    "You understand, the kids that you're competing with have been playing since they were this tall" my mum said, holding her hand down to the height of a toddler. "A Chinese kid who's been playing since he was three is a much better pianist than you are a guitarist.". I'd only been playing guitar for 2-3 years when I applied to go to music school.
    LessWrong | 3 days ago
  • Eight Heuristics of Anti-Epistemology
    Here are eight tools of anti-epistemology that I think anyone can use to hide their norm-violating behavior from being noticed, and deceive people about their character. 1. Maintain Plausible Innocence: Always provide and maintain a plausibly deniable account of your behavior that isn’t norm-violating.
    LessWrong | 3 days ago
  • Podcasts!
    A 9-year-old named Kai (“The Quantum Kid”) and his mother interviewed me about closed timelike curves, wormholes, Deutsch’s resolution of the Grandfather Paradox, and the implications of time travel for computational complexity. This is actually one of my better podcasts (and only 24 minutes long), so check it out! Here’s a podcast I did a […]
    Shtetl-Optimized | 4 days ago
  • OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist
    A message on OpenAI’s internal Slack claimed the activist in question had expressed interest in “causing physical harm to OpenAI employees.”. OpenAI employees in San Francisco were told to stay inside the office on Friday afternoon after the company purportedly received a threat from an individual who was previously associated with the Stop AI activist group. “Our information indicates that...
    Effective Altruism Forum | 4 days ago
  • Book Review: Wizard's Hall
    Ever on the quilting goes, Spinning out the lives between, Winding up the souls of those Students up to one-thirteen. There's a book about a young boy whose mundane life is one day interrupted by a call to go to wizard boarding school, where he gets into youthful hijinks overshadowed by fears about a dark wizard.
    LessWrong | 4 days ago
  • Market Logic I
    Garrabrant Induction provides a somewhat plausible sketch of reasoning under computational uncertainty, the gist of which is "build a prediction market". An approximation of classical probability theory emerges. However, this is only because we assume classical logic.
    LessWrong | 4 days ago
  • You Too Can Write Thousands of Words Every Day
    Here is how
    Bentham's Newsletter | 4 days ago
  • I'll See It When I Believe It
    Fake Nous | 4 days ago
  • Cis people don't have to understand trans people
    Here is a complete list of everything that the average cis person needs to know about trans people:
    Thing of Things | 4 days ago
