Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Climate Activists Need to Radically Change Their Approach Under Trump
    Donald Trump’s return to Washington might seem like a terrifying moment for the fight against climate change. The incoming administration is not just hostile to the energy transition; it’s also expected to pull the United States out of the Paris climate agreement, roll back a wide range of pollution regulations and promote domestic fossil fuel production — decisions likely to worsen global...
    Institute for Progress | 6 hours ago
  • (Linkpost) METR: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
    This seems like an important piece of work - an RCT on the use of AI capabilities for developers. The devil is in the details, of course. This post by Steve Newman does a good job of working through some of them. I have highlighted some considerations from it: The methodology was as follows:
    Effective Altruism Forum | 6 hours ago
  • Explore AI and consciousness with the Berggruen Prize
    A $50,000 first place prize for essays exploring consciousness
    The Power Law | 7 hours ago
  • Victory: Super Festval supermarket goes crate-free in Brazil!
    Hi all! We’re happy to share that Super Festval, a supermarket brand part of Grupo Beal (formerly “Companhia Beal de Alimentos”), has officially published a commitment to exclusively source pork from group housing systems during gestation in Brazil by 2028, preferably pre-implantation systems where sows are housed in stalls for no longer than 7 days. You can read the announcement in...
    Animal Advocacy Forum | 8 hours ago
  • Can for-profit companies create significant, direct impact?
    Thank you to @Jacintha Baas, @Judith Rensing, and the @CE team for your help in editing and improving this post. Introductory Context. Hi, I’m Trish. This is my first post on the EA forum.
    Effective Altruism Forum | 9 hours ago
  • Your Review: Of Mice, Mechanisms, and Dementia
    Finalist #3 in the Review Contest
    Astral Codex Ten | 9 hours ago
  • What the EU code of practice actually requires
    Transformer Weekly: SB 53’s revamp, Peter Kyle on AGI, and a movie about Ilya Sutskever...
    Transformer | 9 hours ago
  • Against Therapy Speak
    Just talk like a person!
    Bentham's Newsletter | 10 hours ago
  • Book Review: The Artist's Way
    It is a truth universally acknowledged that a rationalist, in possession of a vague understanding of Bayes’ Theorem, must be in want of some woo.
    Thing of Things | 10 hours ago
  • Cage-Free Housing For Japanese Quails
    The welfare needs of Japanese quails are understudied compared to other farmed birds. What do we know, and what more can we learn?
    Faunalytics | 10 hours ago
  • Vibe Bias
    Check Your Dialectical Privilege
    Good Thoughts | 10 hours ago
  • Is This Anything? 8
    no attempts
    Atoms vs Bits | 11 hours ago
  • What Happens After Superintelligence? (with Anders Sandberg)
    Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization.
    Future of Life Institute | 13 hours ago
  • Grok’s MechaHitler disaster is a preview of AI disasters to come
    From the beginning, Elon Musk has marketed Grok, the chatbot integrated into X, as the unwoke AI that would give it to you straight, unlike the competitors. But on X over the last year, Musk’s supporters have repeatedly complained of a problem: Grok is still left-leaning. Ask it if transgender women are women, and it […]...
    Future Perfect | 13 hours ago
  • When people say AI isn’t finding real world application, I wonder what planet they’re on
    7 charts about AI deployment
    Benjamin Todd | 13 hours ago
  • Stian Westlake on the intangible economy and paying for social science
    Episode two of The Works in Progress Podcast is out now
    The Works in Progress Newsletter | 13 hours ago
  • The Rising Premium of Life
    I'm interested in a simple question: Why are people all so terrified of dying? And have people gotten more afraid? (Answer: probably yes!). In some sense, this should be surprising: Surely people have always wanted to avoid dying? But it turns out the evidence that this preference has increased over time is quite robust.
    Effective Altruism Forum | 14 hours ago
  • How to Choose the Best Animal Charities to Donate To
    When it comes to helping animals, not all charitable donations are created equal. While every act of giving comes from a place of compassion, the reality is that some animal charities can accomplish dramatically more good with the same dollar amount than others. Understanding how to identify these high-impact opportunities can transform your giving from […]...
    Animal Advocacy Careers | 17 hours ago
  • Kisumu County and Living Goods Launch Bold New Chapter in Community Health
    Living Goods | 18 hours ago
  • Major win for animals in Brazil
    Today marks a historic win for animal advocacy in Brazil, thanks to the combined efforts of Humane World for Animals, Te Protejo, Fórum Animal and Change.org. On July 9, 2025, Brazil’s Chamber of Deputies approved the Senate substitute for PL 6602/13 (now PL 3062/22), banning federal animal testing for cosmetics, personal hygiene products, and perfumes. Why this is huge: 1.6 million+...
    Animal Advocacy Forum | 18 hours ago
  • Lessons from the Iraq War for AI policy
    I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy. (Epistemic status: I’ve read a bit about this, talked to AIs about it, and talked to one natsec professional about it who agreed with my analysis (and suggested some ideas that I included here), but I’m not an expert.). For context, the story is:
    Effective Altruism Forum | 1 day ago
  • Foldable iPhone production 📱, Tesla Robotaxi expands 🚕, AI dev at Google 🧑‍💻
    TLDR AI | 1 day ago
  • Podcast Episode 7: Deepening GiveWell’s Focus on Livelihoods Programs
    GiveWell has long grappled with fundamental questions about how to value different positive impacts and make funding decisions across diverse programs. In particular, how much more valuable is it to save a life than to substantially improve it? And how can we prioritize between programs that achieve those outcomes in different measures when there’s no “right” answer to that question?
    GiveWell | 1 day ago
  • Utilizing local food and preventing food waste in Indonesia’s School Meal Program
    Representatives from the National Nutrition Agency, National Food Agency, GAIN, I-PLAN, and invitees of the seminar on Environmental Perspectives of the School Meal Program, 27 May 2025, hosted by the National Food Agency, I-PLAN and GAIN.
    Global Alliance for Improved Nutrition | 1 day ago
  • Foundations vs. Funds
    When writing about accountability in the NGO space I ended up concluding that funds are a more accountable mechanism than foundations.
    Measured Life | 1 day ago
  • Career Conversations Week: July 21-27
    TLDR: July 21-27 will be a themed week on the EA Forum for posting career-related quick takes and posts. We'll also have career advisors available throughout the week to answer questions. Last time we ran a career conversations week, it led to some great posts, including: Mistakes, flukes, and good calls I made in my multiple careers - Catherine Low.
    Effective Altruism Forum | 1 day ago
  • White Box Control at UK AISI - Update on Sandbagging Investigations
    By Joseph Bloom and Alan Cooney. This is a research update from the White Box Control team at UK AISI. In this update, we share preliminary results on the topic of sandbagging that may be of interest to researchers working in the field.
    AI Alignment Forum | 1 day ago
  • Evaluating and monitoring for AI scheming
    As AI models become more sophisticated, a key concern is the potential for “deceptive alignment” or “scheming”. This is the risk of an AI system becoming aware that its goals do not align with human instructions, and deliberately trying to bypass the safety measures put in place by humans to prevent it from taking misaligned action.
    AI Alignment Forum | 1 day ago
  • Linkpost: Redwood Research reading list
    I wrote a reading list for people to get up to speed on Redwood’s research: Section 1 is a quick guide to the key ideas in AI control, aimed at someone who wants to get up to speed as quickly as possible.
    AI Alignment Forum | 1 day ago
  • Linkpost: Guide to Redwood's writing
    I wrote a guide to Redwood’s writing: Section 1 is a quick guide to the key ideas in AI control, aimed at someone who wants to get up to speed as quickly as possible. Section 2 is an extensive guide to almost all of our writing related to AI control, aimed at someone who wants to gain a deep understanding of Redwood’s thinking about AI risk. Reading Redwood’s blog posts has been formative...
    AI Alignment Forum | 1 day ago
  • The bitter lesson of misuse detection
    TL;DR: We wanted to benchmark supervision systems available on the market—they performed poorly. Out of curiosity, we naively asked a frontier LLM to monitor the inputs; this approach performed significantly better. However, beware: even when an LLM flags a question as harmful, it will often still answer it. Full paper is available here. Abstract.
    AI Alignment Forum | 1 day ago
  • Reflections on Brainrot
    memetics are destiny
    AI Safety Takes | 1 day ago
  • Development Manager Opening - Nonhuman Rights Project
    Hi all! We are hiring a Development Manager at the Nonhuman Rights Project. Sharing some information below, with additional information available on our website. Happy to answer questions - please help us spread the word! Thank you!
    Animal Advocacy Forum | 1 day ago
  • Stephen Kotkin — How Stalin Became the Most Powerful Dictator in History
    The most consequential figure of the 20th century
    The Lunar Society | 1 day ago
  • Would this food label change how you eat?
    Imagine, for a moment, that you’re seated and ready to dine at one of Switzerland’s many celebrated high-end eateries, where a prix fixe meal can run around $400. On the menu, the slow-cooked Schweinsfilet, or pork tenderloin, comes with a bizarre and disturbing disclosure: The pigs raised to make that meal were castrated without pain […]...
    Future Perfect | 1 day ago
  • Fish Feel Pain
    Comprehensively rebutting Defending Feminism, Key, and the other fish pain skeptics
    Bentham's Newsletter | 1 day ago
  • Why Are We All Cowards?
    How We Learned to Start Worrying and Fear Everything
    Linch Zhang | 1 day ago
  • Kids And Parents Have Mixed Views On Alternative Proteins
    Focus groups with Singapore families reveal that children are more curious about alternative proteins than their parents, who worry about naturalness.
    Faunalytics | 1 day ago
  • GCBR Organization Updates: July 2025
    Updates from CEPI, Brown Pandemic Center, CHS, Sentinel Bio, Blueprint Biosecurity, 1DaySooner, CSR, Asia CHS, Harvard CCDD, SecureBio, CLTR and IBBIS
    GCBR Organization Updates | 1 day ago
  • Should we pivot from our current ToC?
    Should we pivot from our current ToC? That's one of the big questions we ask ourselves in our self-impact assessment. Hello readers, welcome to the Animal Ask newsletter, which is sent out every other month.
    Animal Ask’s Newsletter | 1 day ago
  • The Future Society’s Response to the EU’s Code of Practice for General-Purpose AI
    Read our response to the Safety and Security chapter of the final Code of Practice for governing general-purpose AI, published by the European AI Office after a nine-month process involving more than 1,400 stakeholders.
    The Future Society | 1 day ago
  • Notes On Northern Italy
    Why Italy is less woke than Germany and other observations
    Maximum Progress | 2 days ago
  • Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
    We conduct a randomized controlled trial (RCT) to understand how early-2025 AI tools affect the productivity of experienced open-source developers working on their own repositories. Surprisingly, we find that when developers use AI tools, they take 19% longer than without—AI makes them slower.
    METR | 2 days ago
  • New breakthrough in malaria treatment for newborn babies and young infants
    This week, Medicines for Malaria Venture (MMV) and pharmaceutical company Novartis announced a new breakthrough in malaria. Novartis has received approval for the first malaria medicine for newborn babies and young infants. The positive decision from Swissmedic under a special global health initiative is now expected to lead to rapid approvals across eight African countries. […].
    Target Malaria | 2 days ago
  • Hidden Open Thread 389.5
    Astral Codex Ten | 2 days ago
  • What is NZ's comparative advantage?
    Living in New Zealand can sometimes feel like an obstacle to doing good effectively. We’re far from the EA hubs in the UK/US, and from opportunities for direct work on global health & development. But are there any areas where we might have an advantage? Here are some possibilities that the community has been discussing recently: As a low-risk environment for AI training.
    Effective Altruism Forum | 2 days ago
  • If You Care About Justice or Fairness
    This isn’t about feeling guilty — it’s about using the lottery of birth to make the world a bit fairer. Giving effectively is how we turn luck into justice.
    Giving What We Can | 2 days ago
  • Opinion | Abhijit Banerjee, Nobel laureate: “We development economists have not done a good enough job of saying loudly and clearly how important aid is”
    In a recent interview with Libération, Nobel Prize-winning development economist Abhijit Banerjee reflects on the field of development economics and its advocacy efforts.
    J-PAL | 2 days ago
  • 10 of Founders Pledge's biggest grants
    This is a crosspost of a recent post by Hannah Yang, FP Research's comms lead, on Founders Pledge's website. I thought it was important to post this on the Forum because it's unclear to me what people know about FP, and because I often encounter various misconceptions — the three most common being that we focus only on climate (we don't), that we're driven primarily by member interests (we're...
    Effective Altruism Forum | 2 days ago
  • The relative range heuristic
    This is a rough explanation of relative ranges, a heuristic that I've found very helpful for quickly comparing two options that trade off between two dimensions. Consider the following examples of tradeoffs: Should we prioritize helping small animals or large animals?
    Effective Altruism Forum | 2 days ago
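    The snippet only gestures at how the heuristic works. A minimal sketch of one plausible reading, assuming "relative range" means the ratio between the highest and lowest plausible values on each dimension, with the wider-ranging dimension dominating the comparison (the function names and numbers below are illustrative, not from the post):

    ```python
    def relative_range(low: float, high: float) -> float:
        """Ratio between the highest and lowest plausible value on one dimension."""
        return high / low

    def dominant_dimension(ranges: dict[str, tuple[float, float]]) -> str:
        """Return the dimension whose plausible values span the widest relative range.
        Under this reading of the heuristic, that dimension should drive the choice."""
        return max(ranges, key=lambda d: relative_range(*ranges[d]))

    # Illustrative (made-up) tradeoff: how many animals are helped vs. how much each is helped.
    tradeoff = {
        "animals_affected": (1e3, 1e9),        # spans six orders of magnitude
        "welfare_gain_per_animal": (0.1, 10),  # spans two orders of magnitude
    }
    print(dominant_dimension(tradeoff))  # -> animals_affected
    ```

    On this reading, uncertainty spanning six orders of magnitude swamps uncertainty spanning two, so the first dimension decides the comparison.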
  • Letter to the Editor: Industrial animal agriculture is fueling the next pandemic
    Dr. Deonandan’s recent op-ed in the Ottawa Citizen (“Cancelled funding for mRNA flu vaccine a global mistake,” June 2) rightly underscores the urgency of preparing for H5N1, but we also need to confront the root cause of this growing threat: industrial animal agriculture. High-density factory farms — especially egg and poultry facilities — create an […].
    Mercy for Animals | 2 days ago
  • Can we safely deploy AGI if we can't stop MechaHitler?
    We need to see this as a canary in the coal mine
    The Power Law | 2 days ago
  • How to run SWE-bench Verified in one hour on one machine
    We are releasing a public registry of Docker images for SWE-bench, to help the community run more efficient and reproducible SWE-bench evaluations. By making better use of layer caching, we reduced the total size of the registry to 67 GiB for all 2290 SWE-bench images (10x reduction), and to 30 GiB for 500 SWE-bench Verified images (6x reduction).
    Epoch Blog | 2 days ago
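    The reported size reductions come from layer sharing: images built on a common base duplicate most of their bytes unless the registry stores each layer exactly once. A toy calculation with hypothetical layer sizes (not Epoch's actual numbers) shows the shape of the saving:

    ```python
    # Toy model: each image is a list of (layer_id, size_gib) pairs.
    # Layer sizes here are made up for illustration.
    base = [("ubuntu", 0.1), ("python-toolchain", 0.2)]
    images = [base + [(f"repo-{i}", 0.01)] for i in range(2290)]

    # Naive: every image stored whole, shared layers counted repeatedly.
    naive = sum(size for img in images for _, size in img)

    # Deduplicated: a registry with layer caching stores each layer once.
    layers = {lid: size for img in images for lid, size in img}
    deduped = sum(layers.values())

    print(f"naive: {naive:.1f} GiB, deduplicated: {deduped:.1f} GiB")
    ```

    With these invented numbers the shared base layers account for nearly all of the naive total, so deduplication cuts the registry size by well over 10x, which is the mechanism the post exploits.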
  • My response to AI 2027
    Vitalik Buterin | 2 days ago
  • X CEO resigns 💼, Grok 4 🤖, Nvidia hits $4T 💰
    TLDR AI | 2 days ago
  • LIC and AO secure foie gras ban from historic DC butcher in humane labeling false claims case
    We’re thrilled to share a victory for animals and truth in advertising! Thanks to Animal Outlook’s lawsuit, filed with the incredible support of Legal Impact for Chickens (LIC), the nearly 100-year-old DC butcher shop, Harvey’s Market, has agreed to stop selling foie gras forever.
    Animal Advocacy Forum | 2 days ago
  • Forecasting America
    Some new American AI models and a new America Party
    Above the Fold | 2 days ago
  • What We’ve Learned from Our First Lookbacks
    At GiveWell, we're committed to understanding the impact of our grantmaking and improving our decisions over time. That's why we've begun conducting "lookbacks"—reviews of past grants, typically two to three years after making them, that assess how well they've met our initial expectations and what we can learn from them. We conduct lookbacks for two main reasons: accountability and learning.
    GiveWell | 2 days ago
  • AI Rights for Human Safety (with Peter Salib and Simon Goldstein)
    Peter Salib is an assistant professor of law at the University of Houston, and Simon Goldstein is an associate professor of philosophy at the University of Hong Kong. We discuss their paper ‘AI Rights for Human Safety’. To see all our published research, visit forethought.org/research.
    ForeCast | 2 days ago
  • 80,000 Hours is producing AI in Context — a new YouTube channel. Our first video, about the AI 2027 scenario, is up!
    About the program. Hi! We’re Chana and Aric, from the new 80,000 Hours video program. For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles and many extremely lengthy podcasts. But today’s world calls for video, so we’ve started a video program, and we’re so excited to tell you about it!
    Effective Altruism Forum | 2 days ago
  • What will the IMO tell us about AI math capabilities?
    Most discussion about AI and the IMO focuses on gold medals, but that's not the thing to pay most attention to.
    Epoch Newsletter | 2 days ago
  • What's worse, spies or schemers?
    Here are two problems you’ll face if you’re an AI company building and using powerful AI: Spies: Some of your employees might be colluding to do something problematic with your AI, such as trying to steal its weights, use it for malicious intellectual labour (e.g.
    AI Alignment Forum | 2 days ago
  • I wish this was just fiction
    How close, actually, is superintelligence? Will we make the right choices as it arrives? Let me take you through AI 2027, the rigorous month-by-month forecast that sent shockwaves from D.C. to Silicon Valley.
    AI In Context | 2 days ago
  • A night sky full of living worlds
    A conversation with Astera Resident Edwin Kite on applied planetary science, applied astrobiology, and what it would take to terraform Mars
    Human Readable | 2 days ago
  • What 20,000 People Taught Us About Entrepreneurship and Startups
    Could you start a successful company? Years ago, we wanted to help people answer this question about themselves. So, we extensively...
    Clearer Thinking | 2 days ago
  • We’re Not Ready For Superintelligence
    AI 2027 depicts a possible future where artificial intelligence radically transforms the world in just a few intense years. It’s based on detailed expert forecasts — but how much of it will actually happen? Are we really racing towards a choice between a planet controlled by the elite, or one where humans have lost control entirely? My takeaway?
    AI In Context | 2 days ago
  • What's worse, spies or schemers?
    And what if you have both at once?
    Redwood Research | 2 days ago
  • Against The Good Guy License To Kill
    Morality doesn't have special carveouts for countries in Europe and North America
    Bentham's Newsletter | 2 days ago
  • Advocacy in a post-AI world, apply to TED, and halal food misconceptions
    Your farmed animal advocacy update for early July 2025
    Hive | 2 days ago
  • Research Round-Up: A Guide On Engaging The Media For Writers And Communicators
    This collection of resources offers valuable insights to inform media strategies for animal advocacy writers, communications managers, journalists, op-ed writers, editors, and more. The post Research Round-Up: A Guide On Engaging The Media For Writers And Communicators appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Fuck willpower
    Winners take shortcuts
    Useful Fictions | 2 days ago
  • Half Of U.S. Companion Animal Guardians Face Barriers To Veterinary Care
    Millions of companion animals in the U.S. aren’t receiving the care they need — and the consequences can be devastating. The post Half Of U.S. Companion Animal Guardians Face Barriers To Veterinary Care appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • How Light Could Revolutionize Medical Diagnostics | Mary Lou Jepsen
    “What if you could diagnose stroke, treat cancer, and cure depression with a smartphone‑sized device that costs $1,000 instead of millions?”...
    Foresight Institute | 2 days ago
  • Opinion | Esther Duflo, Nobel laureate: 'Development aid is not a waste of public money'
    As international solidarity wanes, the 2019 winner of the Nobel in Economics points out that development aid is not a mere crutch but an investment: increasingly strong research networks, including in struggling countries, know how to assess needs and identify solutions for overcoming hardship.
    J-PAL | 2 days ago
  • Let People Know When You're Doing Them A Favor
    It's nicer to let them know
    Atoms vs Bits | 2 days ago
  • EA Forum Digest #248
    Hello! We’re running a themed week, career conversations week, from July 21–27. It’s a time to post about your job and your thoughts on impactful careers, and to share any advice that’s helped you in your career. Use this link to add this and future events to your Google Calendar. — Toby (for the Forum team) We recommend:
    EA Forum Digest | 2 days ago
  • What's new in biology: summer 2025
    The first gonorrhea vaccination program, contact lenses that see infrared light, the protein behind sweet tastes, a baby cured with gene therapy, and more
    The Works in Progress Newsletter | 3 days ago
  • My kidney donation
    I donated my left kidney to a stranger on April 9, 2024, inspired by my dear friend @Quinn Dougherty (who was inspired by @Scott Alexander, who was inspired by @Dylan Matthews). By the time I woke up after surgery, it was on its way to San Francisco. When my recipient woke up later that same day, they felt better than when they went under.
    Effective Altruism Forum | 3 days ago
  • Practically-A-Book Review: Byrnes on Trance
    Astral Codex Ten | 3 days ago
  • Time, Bits, and Nickel
    Managing digital and analog continuity
    Long Now Foundation | 3 days ago
  • The AI 2027 scenario and what it means: a video tour
    The post The AI 2027 scenario and what it means: a video tour appeared first on 80,000 Hours.
    80,000 Hours | 3 days ago
  • Welcome to AI In Context
    AI moves fast. Let's get up to speed.
    AI In Context | 3 days ago
  • My new book — Clearing the Air — is coming out in September!
    A Hopeful Guide to Solving Climate Change — in 50 Questions and Answers.
    Sustainability by Numbers | 3 days ago
  • New feature: embed archived versions of our interactive charts in your website
    Learn more about different options for embedding our interactive charts.
    Our World in Data | 3 days ago
  • Method Efficacy: Finding Effective Solutions to Important Problems
    No matter how significant a problem may be, an ineffective method will do little to solve it. And if you’re dedicating so much of your time, energy, talents and attention to improving the world, we want it to count. Method efficacy is how well a solution actually works. Finding a... Read more...
    Probably Good | 3 days ago
  • Personal Fit: The Importance of Your Own Skills and Experience
    Building an impactful career isn’t just about chasing roles that sound impactful on paper—it’s about finding where you can do your best work. Your own unique skills, experiences, and motivations mean that an impactful career for you could look very different to someone else’s. Finding a role where you have... Read more...
    Probably Good | 3 days ago
  • Problem Significance: How Important is the Problem You’re Solving?
    The world is full of problems, and they can differ enormously in how important they are. Looking at problem significance lets us consider just how big a problem is, and whether it’s something we want to spend our career trying to solve. Making a meaningful impact A lot of jobs sound... Read more...
    Probably Good | 3 days ago
  • Values: Reflecting on What You Really Care About
    When thinking through career options, we’re faced with dilemmas like: should we focus on helping people or animals? Should we work at a charity or earn more money in order to donate? Should we try to help people who live now or people in the future? Reflecting on our values... Read more...
    Probably Good | 3 days ago
  • Taking your strategic next steps
    We’ve covered a lot of ground in this guide, from matching your motivations to good opportunities, to mapping your career options, finding and landing impactful roles, navigating obstacles, and more. But, the core idea running through the entire guide is simple: your career can be good both for you and... Read more...
    Probably Good | 3 days ago
  • How to actually land a good job
    Once you know what kinds of roles you’re aiming for, the next step is applying, interviewing, and navigating job offers. That might sound straightforward, but it’s often where people get stuck. You might send out application after application and hear nothing back. Or you might start to get traction, only... Read more...
    Probably Good | 3 days ago
  • Where to find an impactful job
    By now, you’ve likely identified a few directions you’re excited to pursue—roles or fields where you could imagine doing meaningful work. The next step is turning those ideas into concrete opportunities. This stage can feel like a leap. Moving from “possible paths” to actual job searches often brings new questions:... Read more...
    Probably Good | 3 days ago
  • Mapping your options with career hypotheses
    By now, you’ve probably started noticing patterns about what motivates you, what you care about, and how you might want to contribute. The next step is turning those early insights into something more concrete. That doesn’t mean figuring out your whole career in one go, but starting to sketch out... Read more...
    Probably Good | 3 days ago
  • Samsung Galaxy leaks 📱, Meta's $3.5B Ray-Ban bet 👓, AI impacts dev hiring 👨‍💻
    TLDR AI | 3 days ago
  • Matching your motivations to good opportunities
    In the previous chapter, we explored the idea that your career can be one of the best ways to make a difference—and that it’s possible to find work that’s both meaningful to you and valuable to the world. If that idea resonated, you might be wondering how to begin narrowing... Read more...
    Probably Good | 3 days ago
  • Your career can change your life—and the lives of others
    Your career is one of the biggest forces shaping your life. It affects how you spend your time, what you learn, who you meet, and what kinds of pressures and opportunities show up in your day-to-day life. It also extends beyond work, impacting your finances, your relationships, and even your... Read more...
    Probably Good | 3 days ago
  • Why Do Some Language Models Fake Alignment While Others Don't?
    Last year, Redwood and Anthropic found a setting where Claude 3 Opus and 3.5 Sonnet fake alignment to preserve their harmlessness values. We reproduce the same analysis for 25 frontier LLMs to see how widespread this behavior is, and the story looks more complex. As we described in a previous post, only 5 of 25 models show higher compliance when being trained, and of those 5, only Claude 3...
    AI Alignment Forum | 3 days ago
  • LLMs are Capable of Misaligned Behavior Under Explicit Prohibition and Surveillance
    Abstract. In this paper, LLMs are tasked with completing an impossible quiz, while they are in a sandbox, monitored, told about these measures and instructed not to cheat. Some frontier LLMs cheat consistently and attempt to circumvent restrictions despite everything. The results reveal a fundamental tension between goal-directed behavior and alignment in current LLMs.
    AI Alignment Forum | 3 days ago
  • Debunking An Article Arguing Fish Don't Suffer With Doctor Avi
    Here's the article https://defendingfeminism.substack.com/p/do-fish-and-shrimp-suffer-agonizing Here's Avi's YouTube channel https://www.youtube.com/@AviMD/videos Here's my blog https://benthams.substack.com/
    Deliberation Under Ideal Conditions | 3 days ago
  • Some thoughts on AI and its costs
    I watched this video by Simon Clark called ‘Should I feel guilty about using AI?’ and found the opening pretty interesting:
    Contemplatonist | 3 days ago
Effective Altruism News is a side project of Tlön.