Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective Altruism for Christians
  • Effective Altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • What's up with Anthropic predicting AGI by early 2027?
    As far as I'm aware, Anthropic is the only AI company with official AGI timelines: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
    LessWrong | 55 minutes ago
  • Common Misconceptions About Anger?
    People often say things like the following about anger’s relationship to other emotions – but are they B.S.? They say: While there is debate about these ideas among people in the field, my opinion is that these statements are misleading and, in some cases, wrong. I think these statements can promote misunderstandings about the nature […]...
    Optimize Everything | 57 minutes ago
  • The Tale of the Top-Tier Intellect
    Once upon a time in the medium-small town of Skewers, Washington, there lived a 52-year-old man by the name of Mr. Humman, who considered himself a top-tier chess-player. Now, Mr. Humman was not generally considered the strongest player in town; if you asked the other inhabitants of Skewers, most of them would've named Mr. Neumann as their town's chess champion.
    LessWrong | 59 minutes ago
  • Leaving Open Philanthropy, going to Anthropic
    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.). Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude’s character/constitution/spec.
    LessWrong | 2 hours ago
  • The Strategic Calculus of AI R&D Automation
    When AI automates AI development, the question shifts from ‘What can we build?’ to ‘What should we build first?’ As difficulty declines, differential value dominates.
    AI Prospects: Toward Global Goal Convergence | 3 hours ago
  • Trying to understand my own cognitive edge
    I applaud Eliezer for trying to make himself redundant, and think it's something every intellectually successful person should spend some time and effort on. I've been trying to understand my own "edge" or "moat", or cognitive traits that are responsible for whatever success I've had, in the hope of finding a way to reproduce it in others, but I'm having trouble understanding a part of it, and...
    LessWrong | 3 hours ago
  • The Unreasonable Effectiveness of Fiction
    [Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.]. In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker.
    LessWrong | 3 hours ago
  • Publishing academic papers on transformative AI is a nightmare
    I am a professor of economics. Throughout my career, I was mostly working on economic growth theory, and this eventually brought me to the topic of transformative AI / AGI / superintelligence. Nowadays my work focuses mostly on the promises and threats of this emerging disruptive technology.
    LessWrong | 3 hours ago
  • Leaving Open Philanthropy, going to Anthropic
    (Audio version, read by the author, here, or search for "Joe Carlsmith Audio" on your podcast app.). Last Friday was my last day at Open Philanthropy. I’ll be starting a new role at Anthropic in mid-November, helping with the design of Claude’s character/constitution/spec.
    Effective Altruism Forum | 3 hours ago
  • How and why you should make your home smart (it's cheap and secure!)
    Your average day starts with an alarm on your phone. Sometimes, you wake up a couple of minutes before it sounds. Sometimes, you find the button to snooze it. Sometimes, you’re already on the phone and it appears as a notification. But when you finally stop it, the lights in your room turn on and you start your day. You walk out of your room.
    LessWrong | 4 hours ago
  • Could a Garland Fund 2.0 Upend America Today?
    Editors’ Note: David Pozen continues HistPhil’s book forum on John Witt’s The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America (Simon & Schuster, 2025). A version of this post originally appeared on the Balkinization blog, which is conducting a forum on Witt’s book as well, with some outstanding contributions by … Continue reading →...
    HistPhil | 4 hours ago
  • 🟩 Trump threatens to send troops to Nigeria and denies Venezuela attack plans, China-US trade détente || Global Risks Weekly Roundup #44/2025
    Iran is carrying out more construction in and around a mountainous nuclear site. The Rapid Support Forces (RSF) have captured the city of el-Fasher in Sudan.
    Sentinel | 4 hours ago
  • Leaving Open Philanthropy, going to Anthropic
    On a career move, and on AI-safety-focused people working at AI companies.
    Joe Carlsmith | 5 hours ago
  • Leftists want real jobs on the leftist commune
    This is one of the most pedantic posts I’ve ever written.
    Thing of Things | 5 hours ago
  • What's up with Anthropic predicting AGI by early 2027?
    I operationalize Anthropic's prediction of "powerful AI" and explain why I'm skeptical
    Redwood Research | 5 hours ago
  • Writing For The AIs
    Astral Codex Ten | 6 hours ago
  • Leaving Open Philanthropy, going to Anthropic
    On a career move, and on AI-safety-focused people working at AI companies. Text version here: https://joecarlsmith.com/2025/11/03/leaving-open-philanthropy-going-to-anthropic/
    Joe Carlsmith Audio | 6 hours ago
  • Blogging: A Balanced View
    It's not all doom and gloom
    Bentham's Newsletter | 6 hours ago
  • Macho Meals: How Masculinity Drives Men’s Meat Attachment
    What does masculinity have to do with meat? Men’s attachment to meat is tied to traditional masculine ideals, but reframing plant-based eating as strong and self-directed could help change that. The post Macho Meals: How Masculinity Drives Men’s Meat Attachment appeared first on Faunalytics.
    Faunalytics | 7 hours ago
  • A glimpse of the other side
    I like to wake up early to watch the sunrise. The sun hits the distant city first, the little sliver of it I can see through the trees. The buildings light up copper against the pale pink sky, and that little sliver is the only bit of saturation in an otherwise grey visual field. Then the sun starts to rise over the hill behind me.
    LessWrong | 7 hours ago
  • Recruitment is extremely important and impactful. Some people should be completely obsessed with it.
    Cross-post from Good Structures. Over the last few years, I helped run several dozen hiring rounds for around 15 high-impact organizations. I've also spent the last few months talking with organizations about their recruitment. I've noticed three recurring themes:
    Effective Altruism Forum | 7 hours ago
  • Feedback Loops Rule Everything Around Me
    Feeeeed meeee
    Atoms vs Bits | 9 hours ago
  • The EU AI Act Newsletter #89: AI Standards Acceleration Updates
    CEN and CENELEC have announced exceptional measures to speed up the development of European standards supporting the AI Act.
    The EU AI Act Newsletter | 9 hours ago
  • ChinAI #334: How AI is "Transforming" a Chinese University's Humanities Program
    Greetings from a world where…
    ChinAI Newsletter | 10 hours ago
  • Compassion in World Farming Southern Africa is Hiring: Communications Officer
    Compassion in World Farming International is a global movement transforming the future of food and farming. We’re recruiting for a passionate and skilled Communications Officer to help amplify our voice and impact across Southern Africa. Communications Officer – Southern Africa. Role Type: Contract until end of March 2026 – Part-time 2 days per week. Location: South Africa - Remote.
    Animal Advocacy Forum | 11 hours ago
  • Open Thread 406
    Astral Codex Ten | 11 hours ago
  • You don’t need better boundaries. You need a better framework.
    Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a […]...
    Future Perfect | 12 hours ago
  • Why peanut butter is back on the kids’ menu
    If, like me, you’re a parent of a young child, there’s one thing you’ve come to fear above all else. (And no, it’s not “Golden” from KPop Demon Hunters played for the 10,000th time, though that’s a close second.). It’s the humble peanut. Even if your child isn’t allergic to the nuts, past surveys have […]...
    Future Perfect | 12 hours ago
  • Erasmus: Social Engineering at Scale
    Sofia Corradi, a.k.a. Mamma Erasmus (2020). When Sofia Corradi died on October 17th, the press was full of obituaries for the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”.
    LessWrong | 12 hours ago
  • Community Health Workers Transform Reproductive Health Service Delivery in Wakiso District, Uganda
    The post Community Health Workers Transform Reproductive Health Service Delivery in Wakiso District, Uganda appeared first on Living Goods.
    Living Goods | 13 hours ago
  • "What's hard about this? What can I do about that?" (Recursive)
    Third in a series of short rationality prompts… My opening rationality move is often "What's my goal?". It is closely followed by: "Why is this hard? And, what can I do about that?". If you're busting out deliberate "rationality" tools (instead of running on intuition or copying your neighbors), something about your situation is probably difficult.
    LessWrong | 13 hours ago
  • Most Irish Foreign Aid Never Leaves the Country
    But, weirdly, this is fine (for now)
    The Fitzwilliam | 14 hours ago
  • Switching to chicken farming: the biggest disaster in Flanders
    Opinion piece in De Standaard (3-11-2025). Last week we read an article in the newspaper that did not cause much of a stir. No one seems to realize that it concerns by far the biggest disaster in Flanders: West Flemish farmers switching to … Read more →...
    The Rational Ethicist | 14 hours ago
  • Lack of Social Grace is a Lack of Skill
    I. I have claimed that one of the fundamental questions of rationality is “what am I about to do and what will happen next?” One of the domains I ask this question the most is in social situations. There are a great many skills in the world. If I had the time and resources to do so, I’d want to master all of them.
    LessWrong | 15 hours ago
  • The Case for DMT for Cluster Headaches: Practical Tips & Why It Deserves Urgent Scientific Attention
    Transcript: Using DMT to Abort Cluster Headaches May all beings be free from suffering, especially those who are trapped in hell. Welcome everybody. Today we’re going to talk about a pretty gnarly topic, but it’s a very important one. I think if we focus as a community and make direct, persistent action towards these goals, […]...
    Qualia Computing | 17 hours ago
  • From sand to solar tent
    Moma, Mozambique – When Islova Alberto Aly decided to venture into fish drying, her primary aim was to generate an income to support her children's education.
    Global Alliance for Improved Nutrition | 18 hours ago
  • Against Subjectivism
    My second piece on summarizing Chappell’s summary of his summary of Parfit.
    Hauke’s Blog | 18 hours ago
  • Why I Transitioned: A Case Study
    An Overture. Famously, trans people tend not to have great introspective clarity into their own motivations for transition. Intuitively, they tend to be quite aware of what they do and don't like about inhabiting their chosen bodies and gender roles. But when it comes to explaining the origins and intensity of those preferences, they almost universally come up short.
    LessWrong | 18 hours ago
  • Halfhaven halftime
    Halfhaven is a virtual blogger camp, an online alternative to Inkhaven Residency. The rules are simple: every day post max 1 article with min 500 words (or equivalent effort). try to get 30 by the end of November (but there are no hard lines). The invitation links keep expiring, the current one is: https://discord.gg/jrJPR3h6.
    LessWrong | 21 hours ago
  • Is vaping less harmful than smoking, and does it help people quit?
    Answers to some of the most frequently asked questions about vaping and its effects.
    Our World in Data | 23 hours ago
  • Gemini Siri 📱, SpaceX datacenters 🛰️, GitHub immutable releases 👨‍💻
    TLDR AI | 23 hours ago
  • Human Values ≠ Goodness
    There is a temptation to simply define Goodness as Human Values, or vice versa. Alas, we do not get to choose the definitions of commonly used words; our attempted definitions will simply be wrong. Unless we stick to mathematics, we will end up sneaking in intuitions which do not follow from our so-called definitions, and thereby mislead ourselves.
    LessWrong | 1 days ago
  • Weak-To-Strong Generalization
    I will be discussing weak-to-strong generalization with Sahil on Monday, November 3rd, 2025, 11am Pacific Daylight Time. You can join the discussion with this link. Weak-to-strong generalization is an approach to alignment (and capabilities) which seeks to address the scarcity of human feedback by using a weak model to teach a strong model.
    AI Alignment Forum | 1 days ago
  • FTL travel and scientific realism
    It's November! I'm not doing Inkhaven, or NaNoWriMo (RIP), or writing a short story every day, or quitting shaving or anything else. But I (along with some housemates) am going to try to write a blog post of at least 500 words every day of the month. (Inkhaven is just down the street a bit and I'm hoping to benefit from some kind of proximity effect.). Today: Llamamoe on Discord complains about.
    LessWrong | 1 days ago
  • The Biggest Unsolved Problem in Philosophy of Science
    Should we believe our scientific theories?
    Bentham's Newsletter | 1 days ago
  • But Does Social Media Use Actually Cause Bad Mental Health?
    It’s interesting how studies on the negative effects of social media on mental health are mixed: some find an effect, some don’t (or only find a very small effect). Some take this as proof that social media is actually fine for mental health. My hypothesis is different. I think that the effects of social media […]...
    Optimize Everything | 1 days ago
  • Reflections on 4 years of meta-honesty
    Honesty is quite powerful in many cases: if you have a reputation for being honest, people will trust you more and your words will have more weight (or so the argument goes). Unfortunately, being extremely honest all the time is also pretty difficult. What happens when the Nazis come knocking and ask if you have Jews in the basement?
    LessWrong | 1 days ago
  • Will Welfareans Get to Experience the Future?
    Cross-posted from my website. Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they're true, so I'm not even going to try.
    Effective Altruism Forum | 1 days ago
  • Networking is less dumb than you might think
    I used to think of networking somewhat like this:
    Thing of Things | 1 days ago
  • We can have growth while fighting climate change
    Climate stories usually start the same way: fire, flood, loss, collapse. The charts are grim. The vibes are worse. But there’s another story in the numbers that starts with what’s working, what’s already being built, and how far we’ve actually come. Hannah Ritchie is a data scientist at the University of Oxford and the author […]...
    Future Perfect | 1 day ago
  • We love dogs as family. We also experiment on them. Will it come to an end?
    Animal rights advocates often contrast humanity’s dismal treatment of animals farmed for food with our adoration bordering on worship of pet cats and dogs — the point being that these distinctions between animals that are equally sentient are arbitrary, hypocritical, and pointlessly cruel. The comparison makes an important point, but it also conceals a grimmer […]...
    Future Perfect | 1 day ago
  • Things I've Become More Confident About
    Last year, I wrote a list of things I’ve changed my mind on. But good truth-seeking doesn’t just require you to consider where you might be wrong; you must also consider where you might be right.
    Philosophical Multicore | 2 days ago
  • Contra Parfit’s ‘Against Egoism’
    My summary of Chappell’s summary of his summary of Parfit. And my commentary. Or: you should be egoistical because people are myopic and selfish.
    Hauke’s Blog | 2 days ago
  • A/B testing could lead LLMs to retain users instead of helping them
    OpenAI’s updates of GPT-4o in April 2025 famously induced absurd levels of sycophancy: the model would agree with everything users would say, no matter how outrageous.
    AI Safety Takes | 2 days ago
  • Why Is Printing So Bad?
    Last time I printed a document, I wrote down the whole process: Open settings and look at list of printers; David tells me which printer I should use. Go to print dialogue; don’t see the relevant printer. Go back to settings and hit buttons which sound vaguely like they’ll add/install something. Go back to print dialogue, realize the printer I wanted had probably been there already and I...
    LessWrong | 2 days ago
  • You’re always stressed, your mind is always busy, you never have enough time
    You have things you want to do, but there’s just never time. Maybe you want to find someone to have kids with, or maybe you want to spend more or higher-quality time with the family you already have. Maybe it’s a work project.
    LessWrong | 2 days ago
  • Ways we can fail to answer
    In what ways can we fail to answer a question? (I mean necessarily fail: actual barriers to knowledge, rather than skill-issue hurdles. But of course contingent failures are much more common: “We didn’t ask the question in the first place”, or “We didn’t have the particular insight that would have allowed for productive research”, or “We didn’t manage to remove every cognitive bias”, or...
    argmin gravitas | 2 days ago
  • Re-rolling environment
    I'm currently on a "rationality as 'skills you practice'" kick. I'm really into subtle cognitive skills. I do think they eventually pay off. But, realistically, if you have a major problem in your life, my experience is that the biggest effect sizes come from radically changing your environment. Move in with new roommates. Get a house in a new neighborhood. Get a new romantic partner. Get...
    LessWrong | 2 days ago
  • Humanizing Expected Value
    I often use the following thought experiment as an intuition pump to make the ethical case for taking expected value, risk/uncertainty-neutrality, and math in general seriously in the context of doing good. There are, of course, issues with pure expected value maximization (e.g. Pascal’s Mugging, St. Petersburg paradox and infinities, etc), that I won’t go into in this post.
    Effective Altruism Forum | 2 days ago
  • Supervillain Monologues Are Unrealistic
    Supervillain monologues are strange. Not because a supervillain telling someone their evil plan is weird. In fact, that's what we should actually expect. No, the weird bit is that people love a good monologue. Wait, what? OK, so why should we expect supervillains to tell us their master plan?
    LessWrong | 2 days ago
  • LLM-generated text is not testimony
    Crosspost from my blog. Synopsis. When we share words with each other, we don't only care about the words themselves. We care also—even primarily—about the mental elements of the human mind/agency that produced the words. What we want to engage with is those mental elements. As of 2025, LLM text does not have those elements behind it.
    LessWrong | 2 days ago
  • November 2025 Newsletter
    🚀 The latest news from the EA community...
    Altruismo eficaz | 2 days ago
  • No Need for Explanation
    Here, I explain why Phenomenal Conservatism is better than Phenomenal Explanationism.
    Fake Nous | 2 days ago
  • A Very Disturbing Moral Argument
    How morality might be inverted
    Bentham's Newsletter | 2 days ago
  • #84 – Dean Spears on the Case for People
    Dean Spears is an economic demographer, development economist, and Associate Professor of Economics at the University of Texas at Austin. With Michael Geruso, Dean is the co-author of After the Spike: Population, Progress, and the Case for People. You can see a full transcript and a list of resources on the episode page on our website. We're back from a hiatus!
    Hear This Idea | 2 days ago
  • Factory Farming is a Blight
    The practices of industrialized animal farming are aesthetically and morally revolting. These practices can be phased out. The post Factory Farming is a Blight appeared first on Palladium.
    Palladium Magazine Newsletter | 2 days ago
  • The Ozempic effect is finally showing up in obesity data
    For years, obesity rates in the US have gone in one direction: up. From the first year it was launched, Gallup’s National Health and Well-Being Index has found that the share of US adults reporting obesity has climbed and climbed, rising from 25.5 percent in 2008 to 39.9 percent in 2022. That survey caught the […]...
    Future Perfect | 2 days ago
  • Why you should write blog posts and not be a blogger
    I am doing Inkhaven this month, which means that (if all goes well) you’ll get to hear from me every day for the next 30 days.
    Thing of Things | 2 days ago
  • Helping people spot misinformation
    And the greatest gift psychology gave the world
    Reasonable People | 3 days ago
  • 63rd Pugwash Conference in Hiroshima, “80 Years After the Atomic Bombing – Time for Peace, Dialogue and Nuclear Disarmament”
    From 1-5 November 2025, Hiroshima hosts the 63rd Pugwash Conference. We are honoured to be meeting in Hiroshima to commemorate … More...
    Pugwash Conferences on Science and World Affairs | 3 days ago
  • Will Welfareans Get to Experience the Future?
    Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they’re true, so I’m not even going to try. Cross-posted to the Effective Altruism Forum.
    Philosophical Multicore | 3 days ago
  • Every Forum Post on EA Career Choice & Job Search
    TLDR: I went through the entirety of the career choice, career advising, career framework, career capital, working at EA vs. non-EA orgs, and personal fit tags and have filtered to arrive at a list of all posts relevant to figuring out the career aspect of the EA journey up until now (10/25/25).
    Effective Altruism Forum | 3 days ago
  • Facing AI and Disinformation - with Tim (Bienfaits Pour Tous)
    ⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️ Interested in the content? Subscribe and click the 🔔 Did you enjoy this video? Remember to give it a 👍 and share it.
    The Flares | 3 days ago
  • Can AI systems introspect?
    A reflection on a selection of results by the exceptional Jack Lindsey
    Experience Machines | 3 days ago
  • On keeping a packed suitcase
    This Halloween, I didn’t need anything special to frighten me. I walked all day around in a haze of fear and depression, unable to concentrate on my research or anything else. I saw people smiling, dressed up in costumes, and I thought: how? The president of the Heritage Foundation, the most important right-wing think tank […]...
    Shtetl-Optimized | 3 days ago
  • Anthropic's Pilot Sabotage Risk Report
    As practice for potential future Responsible Scaling Policy obligations, we're releasing a report on misalignment risk posed by our deployed models as of Summer 2025. We conclude that there is very low, but not fully negligible, risk of misaligned autonomous actions that substantially contribute to later catastrophic outcomes.
    LessWrong | 3 days ago
  • The Epoch AI Brief - October 2025
    Report on decentralized training, new Epoch Capabilities Index for tracking AI progress, FrontierMath evaluations of leading models, revenue insights on OpenAI, and hiring for two open positions.
    Epoch Newsletter | 3 days ago
  • AI, Animals & Digital Minds NYC 2025: Retrospective
    Our Mission: Rapidly scale up the size and influence of the community trying to make AI and other transformative technologies go well for sentient nonhumans. One of the key ways we do this is through our events. This article gives insight into our most recent event, AI, Animals and Digital Minds NYC 2025, including: Lightning talks. Topics and ideas covered. Attendee feedback.
    Effective Altruism Forum | 3 days ago
  • Talk to the City Case Study: Amplifying Youth Voices in Australia
    YWA deployed Talk to the City (T3C) as one of their research platforms, introducing an innovative interface that bridged the gap between large-scale data collection and qualitative insight. The platform's key innovation lay in its interactive visualization interface, which allowed researchers to dynamically explore relationships between different data points while maintaining direct...
    AI Objectives Institute | 3 days ago
  • Animal Equality updates & vacancies - October 2025
    Hello everyone! Here you can find the October updates from Animal Equality. We hope that you find it helpful and inspiring. We also include our current job openings. Thank you!. Global. Animal Equality concluded seven weeks of protests in the Netherlands against grocery giant Ahold Delhaize, demanding an end to the use of cages for hens in its U.S. supply chain.
    Animal Advocacy Forum | 3 days ago
  • Good tokens 2025-10-31
    Spooky tokens
    Atoms vs Bits | 3 days ago
  • The fusing of AI firms and the state is leading to a dangerous concentration of power
    With all of this in mind, Hard Reset spoke with researcher Sarah West, the co-executive director of a think tank advocating for an AI that benefits the public interest, not just a select few. We discuss this consolidation of power among a few AI players—and how the government is actually hindering the development of healthier competition and consumer-friendly AI products, while flirting with...
    AI Now Institute | 3 days ago
  • The Destruction in Gaza Is What the Future of AI Warfare Looks Like
    “AI systems, and generative AI models in particular, are notoriously flawed with high error rates for any application that requires precision, accuracy, and safety-criticality,” Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told Gizmodo. “AI outputs are not facts; they’re predictions.
    AI Now Institute | 3 days ago
  • ChatGPT safety systems can be bypassed to get weapons instructions
    “That OpenAI’s guardrails are so easily tricked illustrates why it’s particularly important to have robust pre-deployment testing of AI models before they cause substantial harm to the public,” said Sarah Meyers West, a co-executive director at AI Now, a nonprofit group that advocates for responsible and ethical AI usage.
    AI Now Institute | 3 days ago
  • The Rise and Fall of Nvidia’s Geopolitical Strategy
    China’s Cyberspace Administration last month banned companies from purchasing Nvidia’s H20 chips, much to the chagrin of its CEO Jensen Huang. This followed a train wreck of events that unfolded over the summer. The post The Rise and Fall of Nvidia’s Geopolitical Strategy appeared first on AI Now Institute.
    AI Now Institute | 3 days ago
  • Want to grow your group’s impact? Apply to the Organiser Support Programme (OSP)!
    Applications have opened for CEA’s mentorship program, the online conference EA Connect is happening soon, and other opportunities! Effective Altruism South Africa's annual...
    EA Groups Newsletter | 3 days ago
  • Newsletter #4
    More useful AI stuff.
    Peter Hartree | 3 days ago
  • From Open Science to AI: Benchmarking LLMs on Reproducibility, Robustness, and Replication
    At the Center for Open Science (COS), our work is about making research more transparent, rigorous, and verifiable. As AI tools enter the research workflow, we need evidence about what they actually contribute to scientific credibility.
    Center for Open Science | 3 days ago
  • Open positions to grow our podcast team
    The post Open positions to grow our podcast team appeared first on 80,000 Hours.
    80,000 Hours | 3 days ago
  • FAQ: Expert Survey on Progress in AI methodology
    Context
    AI Impacts | 3 days ago
  • The markets aren’t bracing for an AI crash — yet
    Transformer Weekly: Blackwell chips and China, White House warning to pro-AI super pac, and OpenAI’s restructure...
    Transformer | 3 days ago
  • Linkpost for October
    Effective Altruism
    Thing of Things | 3 days ago
  • How Much Should We Spend on Scientific Replication?
    A data-driven framework for targeting replication funding where it matters most
    Institute for Progress | 3 days ago
  • What You Need to Refute Arguments With Astronomical Stakes
    A merely pretty good rebuttal isn't enough
    Bentham's Newsletter | 3 days ago
  • Third Latin American Congress of Animal Law, Arba 2025
    Hello FAST community. I hope you're all doing great! We would like to invite you to register for our Third Latin American Congress of Animal Law, Arba 2025, which will be held virtually on Thursday, November 13, from 5:00 p.m. to 9:00 p.m. (Peruvian time). We believe that holding this third congress represents a significant achievement for the region.
    Animal Advocacy Forum | 3 days ago
  • Sarah Paine – How Russia sabotaged China's rise
    Plus, where Russia and China go from here
    The Lunar Society | 3 days ago
  • Animal Welfare Legislation In The European Union: A Call For Consistency
    Across the European Union, member states have taken different approaches to animal welfare legislation, causing gaps in protection that leave farmed animals vulnerable to harmful systems and practices. The post Animal Welfare Legislation In The European Union: A Call For Consistency appeared first on Faunalytics.
    Faunalytics | 3 days ago
  • Compassion in World Farming is Hiring: Senior Digital Campaigns Coordinator
    Compassion in World Farming International is a global movement transforming the future of food and farming. We’re recruiting for a creative and driven part-time Senior Digital Campaigns Coordinator to help us mobilise public support and influence decision-makers through compelling digital campaigns.
    Animal Advocacy Forum | 3 days ago
  • How health is financed in LMICs—and why it matters for doing good | Peter Koziol | EAG London: 2025
    How health is financed in low- and middle-income countries (LMICs) determines the effectiveness and reach of global health efforts. In this talk, Peter Koziol explores the key dynamics of health financing—from the scale of different funding sources and the roles of major actors, to how these factors impact specific diseases—revealing what this means for those aiming to maximize their positive...
    Centre for Effective Altruism | 3 days ago
  • Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes
    By Robert Wiblin | Watch on Youtube | Listen on Spotify | Read transcript. Episode summary. Whatever your skills are, whatever your interests are, we’re out of the world where you have to be a conceptual self-starter, theorist mathematician, or a policy person — we’re into the world where whatever your skills are, there is probably a way to use them in a way that is helping make maybe...
    Effective Altruism Forum | 3 days ago

ORGANIZATIONS

  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.