Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Target Malaria Uganda Joins World Malaria Day Commemoration in Iganga
    Target Malaria Uganda took part in the national commemoration of World Malaria Day in Iganga, held at Bulamagi Subcounty grounds under the theme: “Driven to End Malaria: Now We Can, Now We Must.” The event also included the graduation of over 100 Community Health Extension Workers (CHEWs). The function, held on April 24, 2026, was […].
    Target Malaria | 2 hours ago
  • Ireland's AI Research Gap
    Why are we absent from frontier research?
    The Fitzwilliam | 3 hours ago
  • Contra Everyone On Taste
    Astral Codex Ten | 5 hours ago
  • Hidden Open Thread 432.5
    Astral Codex Ten | 5 hours ago
  • Effective Altruism focused on bednets while a malaria vaccine was stuck for 35 years. The case for Abundance.
    This post was cross-posted from Positive Sum by the Forum team. The author notes: I'm not saying every abundance goal meets this bar, e.g. high speed rail in America would not. This post is intended to clarify abundance's relation to EA, rather than to criticize EA prioritization. Subtitle: Functional governance and democracy helps many EA cause areas.
    Effective Altruism Forum | 9 hours ago
  • Anthropic + xAI Colossus 🤝, Google Expert Advice 💬, design from the inside 🧑‍🎨
    TLDR AI | 11 hours ago
  • Many individual CEVs are probably quite bad
    I was thinking about Habryka's article on Putin's CEV, but I am posting my response here, because the original article is already 3 weeks old. I am not sure how exactly a person's CEV is defined.
    LessWrong | 13 hours ago
  • Considerations for PPE Strategy
    The Goal. The Coefficient Giving Biosecurity and Pandemic Preparedness team works to mitigate existential risks from engineered pathogens. A big part of why we're worried about this is that AI could make these risks much worse on short timelines. One motivating concern is that a misaligned AI or.
    Defenses in Depth | 14 hours ago
  • US Farm Bill alert, SE Asia incubator, and new global slaughter data
    Your farmed animal advocacy update for early May 2026
    Hive | 15 hours ago
  • I made a graphic of the expanding moral circle - free to use
    The "expanding moral circle" -- the idea that moral concern has (or, at least, should) widened over time from family, to community, to nation, to all humanity, and (arguably) outward to all sentient beings -- was developed by W.E.H. Lecky (1869) and popularized by Peter Singer in The Expanding Circle (1981).
    Effective Altruism Forum | 17 hours ago
  • x-risk-themed
    Sometimes, a friend who works around here, at an x-risk-themed organisation, will think about leaving their job. They’ll ask a group of people “what should I do instead?”. And everyone will chime in with ideas for other x-risk-themed orgs that they could join.
    LessWrong | 18 hours ago
  • Why I Find Woke Criticism of Veganism and Effective Altruism So Outrageous
    Using the language of the oppressed to justify ignoring their interests
    Bentham's Newsletter | 19 hours ago
  • Faunalytics Index – May 2026
    This month’s Faunalytics Index provides facts and stats about the welfare of egg-laying ducks in Indonesia, a program to help unhoused people and their companion animals, misperceptions about honeybees, and more. The post Faunalytics Index – May 2026 appeared first on Faunalytics.
    Faunalytics | 20 hours ago
  • Palantir’s controversy is the product
    Palantir’s fiery rhetoric helps mystify its mostly mundane tech — propping up its share price and preserving its national security contracts...
    Transformer | 20 hours ago
  • We might only get one real attempt at superintelligence
    Rational Animations | 20 hours ago
  • Future-Proofing EU AI Gigafactories: Four Design Imperatives
    The EU's AI Gigafactory initiative is its largest planned compute investment to date. Our new memo identifies four imperatives that the initiative must address to deliver on Europe's frontier AI ambitions. The post Future-Proofing EU AI Gigafactories: Four Design Imperatives appeared first on The Future Society.
    The Future Society | 20 hours ago
  • Surviving Mirror Life: A Manual for Resilience in Buildings: Introduction to the threat, concepts and scenario parameters
    Epistemic certainty: Obviously loads of uncertainty on mirror life risks and the degree to which we'd have to pressurize buildings or filter outdoor air. Moderately high certainty for the best hasty pathways for doing this in North American and a narrow subset of European buildings. Lower certainty as we move towards international buildings.
    Effective Altruism Forum | 20 hours ago
  • What if LLMs are mostly crystallized intelligence?
    Summary: LLMs are better at developing crystallized intelligence than fluid intelligence. That is: LLM training is good at building crystallized intelligence by learning patterns from training data, and this is sufficient to make them surprisingly skillful at lots of tasks.
    LessWrong | 20 hours ago
  • EA Forum Digest #290
    EA Forum Digest #290: Strategic AI debates, everyday impact, and what’s happening across EA. Hello! No news this week, enjoy the digest. — Toby (for the Forum team) We recommend: Open strategic questions for digital minds (Lucius Caviola, 15 min). AIM's new charity taxonomy (Aidan Alexander, Morgan Fairless, Ambitious Impact, 13 min).
    EA Forum Digest | 21 hours ago
  • AI Now is Hiring!
    We are at a pivotal moment in the fight to shape the future of AI and its role in society. AI Now is scaling up our team to meet the moment, looking to make three hires to help us grow the organization as we enter our next phase: More information on each role can be […]. The post AI Now is Hiring! appeared first on AI Now Institute.
    AI Now Institute | 21 hours ago
  • AI Now Is Hiring a Comms Associate
    We are looking for a high-touch, digitally savvy communications professional to support the organization’s external presence across a range of channels. The Communications Associate will be a primary point of contact for engagement with the public and press, working in close partnership with our Senior Director and wider team to execute our comms strategy. We […].
    AI Now Institute | 22 hours ago
  • AI Now Is Hiring a Senior Operations Director
    We’re looking for a senior leader to support the organization through this next phase of growth. Experienced and results-driven, this individual will have a finger to the pulse of the organization, working in close partnership with our Senior Director to build the systems and processes necessary for our team to thrive. This role requires a […].
    AI Now Institute | 22 hours ago
  • AI Now Is Hiring a Program Associate
    We’re looking for a Program Associate to help execute our programs so they can be maximally impactful. With a bias to action and high degree of attention to detail, this individual will work at the frontline of executing AI Now’s flagship reports and events, providing support to the Senior Director across the range of projects […].
    AI Now Institute | 22 hours ago
  • We grew ~10x last year, and are now planning for the next 10x
    Hey folks! We’ve recently done an internal impact assessment and thought it would be helpful to share its highlights. (Due to capacity constraints, we opted to share the current post rather than wait for a longer and more polished one, but we’re happy to answer questions.) For context, our goal at Probably Good is to help people build careers that are good for them and for the world.
    Effective Altruism Forum | 22 hours ago
  • Useful conversations & resources from our Slack community
    Hive Slack Threads: April
    Hive | 23 hours ago
  • The backlash to Billie Eilish’s vegan comments explains a lot about the American left (and everyone else)
    Last week, in a video interview with Elle magazine, the pop star Billie Eilish was asked the following question: “What’s one hill you’d die on?”  “Y’all not gonna like me for this one,” Eilish said. “Eating meat is inherently wrong.”  She then added that it’s hypocritical to say you love all animals but also eat […]...
    Future Perfect | 23 hours ago
  • With Me For My Looks
    what's the solution?
    Atoms vs Bits | 24 hours ago
  • How Cleaner Salt Production in Tanga Is Improving Nutrition Outcomes
    When you ask families in Tanga what salt means to them, the answer is often simple: “It’s something we cook with every day.” Yet few realise that the quality of that salt, its purity, safety, and level of iodization, directly affects the health of households, particularly children and pregnant...
    Global Alliance for Improved Nutrition | 1 days ago
  • The end of the fallout bunker
    In Germany, at least
    Existential Crunch | 1 days ago
  • Your rights when flying to Europe
    Europe (and the UK) have strong protections for flyers in the case of delayed or cancelled flights. However very few people are aware of these, and airlines will almost always try to wriggle out of paying up. Even travel agents are often unaware of these laws, or unwilling to fight the airline for you.
    LessWrong | 1 days ago
  • A draft honesty policy for credible communication with AI systems
    If humans and advanced AI systems are going to cooperate—to make honest deals and avoid negative-sum conflict—AIs will need reasons to trust us. By default, they won't have many: humans routinely lie to AIs in evaluations, and developers control much of what models see and believe. We share a sample honesty policy that AI companies could adopt.
    Forethought | 1 days ago
  • Rust in Numbers
    Why do manure spreaders have life cycles?
    Asterisk | 1 days ago
  • iOS 3rd party AI 🤖, OpenAI phone 2027 📱, compounding AI work 📈
    TLDR AI | 1 days ago
  • Model Spec Midtraining: Improving How Alignment Training Generalizes
    tl;dr We introduce model spec midtraining (MSM): after pre-training but before alignment fine-tuning, we train models on synthetic documents discussing their Model Spec, teaching them how they should behave and why. This controls how models generalize from subsequent alignment training—for example, two models with identical fine-tuning can generalize to different values depending on how MSM...
    LessWrong | 1 days ago
  • Is AI really a bubble?
    "A technology can be a bubble and still be real. The dot-com bubble was a bubble. The internet was real.". In 2021, experts predicted AI would win a Math Olympiad gold medal in 22 years. It happened last year. A few weeks ago, GPT 5.2 published a novel result in physics. Now the AI companies are openly working on AIs that build smarter AIs that build smarter AIs.
    Machine Intelligence Research Institute | 1 days ago
  • May is Healthy Vision Month. May 10 is Mother’s Day. This is what it looks like to protect a child’s future.
    If you have children, or have ever been around a one-year-old, you know they are into everything. It is the age of eager discovery; of reaching, crawling, and finally finding your feet. Sahil is no different. He has that same drive to explore, but for the first year of his life, he just couldn’t see …. The post May is Healthy Vision Month. May 10 is Mother’s Day.
    Seva Foundation | 2 days ago
  • A Pro-Supply To-Do List for Congress’s Housing Bill
    America can’t afford a lowest-common-denominator housing supply bill
    Institute for Progress | 2 days ago
  • RIP Classic Reasoning Benchmarks. What’s Next?
    Give up at least one of: text only, short time horizon, easy to grade, and expert human superiority.
    Epoch Newsletter | 2 days ago
  • What holds AI safety together? Co-authorship networks from 200 papers
    We (social science PhD students) computed co-authorship networks based on a corpus of 200 AI safety papers covering 2015-2025, and we’d like your help checking if the underlying dataset is right. Co-authorship networks make visible the relative prominence of entities involved in AI safety research, and trace relationships between them.
    LessWrong | 2 days ago
  • Let Kids Keep More Productivity Gains
    While I was traveling Julia asked me: why is Anna saying her fiddle practice is only two minutes? In this case, two minutes was the right amount of time! Anna (10y) and I had been fighting a lot about practice. She'd complain, slump, stop repeatedly to make adjustments, and generally be miserable.
    LessWrong | 2 days ago
  • Does your AI perform badly because you — you, specifically — are a bad person
    Claude really got me lately. I’d given it an elaborate prompt in an attempt to summon an AGI-level answer to my third-grade level question. Embarrassingly, it included the phrase, “this work might be reviewed by probability theorists, who are very pedantic”. Claude didn’t miss a beat.
    LessWrong | 2 days ago
  • AI risk was not invented by AI CEOs to hype their companies
    I hear that many people believe that the idea of advanced AI threatening human existence was invented by AI CEOs to hype their products. I’ve even been condescendingly informed of this, as if I am the one at risk of naively accepting AI companies’ preferred narratives. If you are reading this, you are probably familiar enough with the decades-old AI safety community to know this isn’t true.
    LessWrong | 2 days ago
  • [Linkpost] Interpreting Language Model Parameters
    This is the latest work in our Parameter Decomposition agenda. We introduce a new parameter decomposition method, adVersarial Parameter Decomposition (VPD) and decompose the parameters of a small language model with it. VPD greatly improves on our previous techniques, Stochastic Parameter Decomposition (SPD) and Attribution-based Parameter Decomposition (APD).
    LessWrong | 2 days ago
  • [Linkpost] Interpreting Language Model Parameters
    This is the latest work in our Parameter Decomposition agenda. We introduce a new parameter decomposition method, adVersarial Parameter Decomposition (VPD) and decompose the parameters of a small language model with it. VPD greatly improves on our previous techniques, Stochastic Parameter Decomposition (SPD) and Attribution-based Parameter Decomposition (APD).
    AI Alignment Forum | 2 days ago
  • New Book — Compassionate Purpose: Personal Inspiration for a Better World
    “How are we to live, in a world in which there is so much unnecessary suffering? Magnus Vinding looks unflinchingly at that question, and gives an answer that is realistic, and yet inspiring. Read this book. It may change your life.” — Peter Singer, author of Animal Liberation. I have just published a book:
    Effective Altruism Forum | 2 days ago
  • Motivated reasoning, confirmation bias, and AI risk theory
    Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias. - From Scott Alexander's review of Julia Galef's The Scout Mindset.
    AI Alignment Forum | 2 days ago
  • What’s new in biology: May 2026
    A cure for congenital deafness, recreating snake venom, antibodies, a legend in cardiovascular medicine, and a successful hair loss treatment?
    The Works in Progress Newsletter | 2 days ago
  • The Best Argument Against Deontology Is About Suitcases
    Pack it up deontologists!
    Bentham's Newsletter | 2 days ago
  • What Tourists Will (And Won’t) Pay For Whale Watching
    A study of blue whale watchers in Mexico finds that boat crowding more than whale numbers shapes what tourists are willing to pay, with implications for animal welfare, local economies, and conservation. The post What Tourists Will (And Won’t) Pay For Whale Watching appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Business Operations Associate / Specialist
    80,000 Hours | 2 days ago
  • People Operations Associate / Specialist
    80,000 Hours | 2 days ago
  • Recruiting Associate / Specialist
    80,000 Hours | 2 days ago
  • A Guidebook on “Whole-of-State Cybersecurity”
    A new guidebook released by CLTC’s Public Interest Cybersecurity Program highlights the benefits of “community cyber defense programs” — including cybersecurity clinics, regional security operation centers (RSOCs), and state cyber corps — as a resource for defending organizations like nonprofits, rural hospitals, schools, local utilities, counties, municipalities, and small businesses from...
    Center for Long-Term Cybersecurity | 2 days ago
  • Zach Robinson | Effective Altruism Stories
    “Why is it that we set lines in these seemingly arbitrary places based on what people believe or what they look like or how close they are to us?” Zach Robinson couldn't answer that question growing up in Omaha, where he had a different race, religion, and sexual orientation than most of his community.
    Centre for Effective Altruism | 2 days ago
  • Your Grants Are the Floor, Not the Ceiling
    The more I have been working with grantmakers, the more I have come to value the impact they can have outside of their direct grantmaking.
    Measured Life | 2 days ago
  • Would You Like To Give Me Money?
    if so, what for?
    Atoms vs Bits | 2 days ago
  • Compassionate Purpose: Personal Inspiration for a Better World
    “Read this book. It may change your life.” — Peter Singer, author of Animal Liberation. What if the point of self-improvement were not just to feel better or get ahead, but to become more capable of helping in a hurting world? In Compassionate Purpose, Magnus Vinding bridges self-help and ethics with a framework for personal development...
    Magnus Vinding | 2 days ago
  • Krystal Birungi is awarded the Global Citizen Prize
    It is with great pleasure that I announce that my colleague Krystal Birungi of Target Malaria Uganda at the Uganda Virus Research Institute has been awarded the 2026 Global Citizen Prize. The Global Citizen Prize seeks to identify and celebrate grassroots activists in local communities who are fighting for social justice, championing the most vulnerable, […].
    Target Malaria | 2 days ago
  • Open position: Advisor
    80,000 Hours | 2 days ago
  • How much electricity does AI consume? [2025 summary]
    What share of electricity is consumed by data centres? What's the energy footprint of ChatGPT and other chatbots?
    Sustainability by Numbers | 2 days ago
  • Post-Inkhaven Post
    (Skip if you aren’t interested in Inkhaven chatter). Watteau, The Embarkation for Cythera (1717). Inkhaven is over. I am sad. Surprisingly: I didn’t actually find the writing that hard. Lots of people, maybe the majority of the cohort, were working right up to midnight, and didn’t have any extra posts in the bank for days when they didn’t want to write. It’s quite possible that...
    Henry Stanley | 2 days ago
  • OpenAI exec finances 💰, Amazon supply chain services 🚚, Redis arrays 🧑‍💻
    TLDR AI | 2 days ago
  • Goodmaxxing
    (Crosspost). If you’re young and online, you’re probably maxxing something. Maybe you’re looksmaxxing: trying to maximize your hotness (e.g. by hitting yourself in the face with a hammer). Maybe, like Clavicular, you do it just to mog other people—to look better than they do. But good looks reach diminishing marginal returns.
    Effective Altruism Forum | 2 days ago
  • May is Healthy Vision Month. This is how sight united three generations of women.
    In Battambang, Cambodia, three generations of women run a family car wash. It’s a life built on grit, love, and long days, but for 20 years, there was a missing piece at the center of their home. At 74, Phen Mao lived in a blur. After she lost her sight, her daughter, Lorb, carried the ….
    Seva Foundation | 2 days ago
  • It's nice of you to worry about me, but I really do have a life
    I have two shameful secrets that I probably shouldn't talk about online: I love my family. I enjoy my hobbies. "What an idiot!" you probably think. "Doesn't he realize that at his next job interview, HR will probably use an AI that can match his online writing based on a short sample of written text, and when they ask 'hey AI, is this guy really 100% devoted to his job, and does he spend...
    LessWrong | 3 days ago
  • Irretrievability; or, Murphy's Curse of Oneshotness upon ASI
    Example 1: The Viking 1 lander. In the 1970s, NASA sent a pair of probes to Mars, the Viking 1 and Viking 2 missions. Total cost of $1B (1970), equivalent to about $7B (2025). The Viking 1 probe operated on Mars's surface for six years, before its battery began to seriously degrade. One might have thought a battery problem like that would spell the irrevocable end of the mission.
    LessWrong | 3 days ago
  • AIM's new charity taxonomy
    0. I don't work at AIM. Why care about this? This taxonomy is written from AIM's perspective, but it may be helpful more broadly: if you're starting a new charity, incubating others, or doing charity idea research, the taxonomy gives you a structured way to think about which ideas to pursue, what founder profile fits, and what research and support each idea needs.
    Effective Altruism Forum | 3 days ago
  • 🟡 Iran says it will target US naval vessels, UAE leaves OPEC, GPT-5.5 similar to Mythos on cyber tasks || Global Risks Weekly Roundup #18/2026
    Executive summary
    Sentinel | 3 days ago
  • “If You Do One Thing for Animals This Year, Do This” by Becca Rogers
    There is a short window to prevent a US bill that would overturn decades of animal welfare progress. This is arguably the most consequential piece of farm animal legislation in U.S. history. Summary. The Farm Bill currently being considered by the U.S. Congress includes the “Save Our Bacon Act”, which would eliminate states' abilities to set standards on how farmed animals are raised and...
    Effective Altruism Forum Podcast | 3 days ago
  • AI Industrial Takeoff — Part 1: Maximum growth rates with current technology
    How fast could an AI-driven economy grow? Most economists expect a few percentage points at best, comparable to previous general-purpose technologies (Acemoglu (2024)). Those closer to AI development tend to imagine something much more radical (Shulman (2023); Davidson and Hadshar (2025)). This series aims to ground growth rates in how physical production works.
    LessWrong | 3 days ago
  • Goodmaxxing
    A letter to Gen Alpha
    Bentham's Newsletter | 3 days ago
  • AI Industrial Takeoff — Part 1: Maximum growth rates with current technology
    How fast can an AI-driven economy grow? Economists expect a few percentage points at best; those closer to AI development imagine Dyson spheres within years. Who is correct?
    Defenses in Depth | 3 days ago
  • How the Radical Fund Sustained Radical Imagination
    Editors’ Note: Carmen Rojas continues HistPhil’s book forum on John Witt’s The Radical Fund (Simon & Schuster, 2025). John Fabian Witt’s The Radical Fund: How a Band of Visionaries and a Million Dollars Upended America is one of the best books I’ve read about the perils and promises of philanthropy in the United States.
    HistPhil | 3 days ago
  • Taking woo seriously but not literally
    I think that a lot of “woo” - a broad term that includes things like chakras, energy healing, Tarot, various Eastern religions and neopagan practices, etc. - consists of things that have real effects and uses, even if many (though not all) of their practitioners are mistaken about the exact mechanisms and make unwarranted metaphysical claims. Now, a woo practitioner might explain what’s...
    LessWrong | 3 days ago
  • Open Thread 432
    Astral Codex Ten | 3 days ago
  • Import AI 455: AI systems are about to start building themselves.
    The first step towards recursive self improvement
    Import AI | 3 days ago
  • Linkpost for May
    Effective Altruism
    Thing of Things | 3 days ago
  • Van Halen Of The Heart
    just show you care
    Atoms vs Bits | 3 days ago
  • ChinAI #357: AI Surveillance in Chinese Universities
    Greetings from a world where…...
    ChinAI Newsletter | 3 days ago
  • Podcast | How will falling fertility rates hurt the economy? With Melissa Kearney
    Typically, a society’s population remains stable if women have about 2.1 children each. By that metric, the world has a big problem. In developed countries the total fertility rate is well below that figure. So what are the economic consequences of that shortfall?
    J-PAL | 3 days ago
  • Can putting a price tag on ending poverty unlock billions in giving?
    New research from J-PAL affiliate Paul Niehaus, cofounder of GiveDirectly, reveals ending extreme poverty may be more achievable than many assume. The question now is whether that kind of clarity can mobilize philanthropic money sitting on the sidelines...
    J-PAL | 3 days ago
  • Podcast | Boosting farmers' profits
    In this episode of VoxDevTalks, Craig McIntosh discusses a recent J-PAL policy insight that takes stock of the evidence from randomised controlled trials on credit, subsidies, and cash transfers for smallholder farmers, arriving at conclusions that challenge some of agriculture's most persistent development assumptions.
    J-PAL | 3 days ago
  • The EU AI Act Newsletter #101: Trilogue Breakdown
    Talks on delaying the AI Act collapse over industrial AI, Merz diverges from his coalition partner, and Parliament invites Anthropic to a hearing on the Mythos model.
    The EU AI Act Newsletter | 3 days ago
  • What is a whale worth, and how much does a shrimp weigh on the moral scales?
    Opinion piece in De Standaard (04-05-2026). One rescued humpback shows how unreliable our moral radar is. It was movingly beautiful to see: humpback whale Timmy swimming free again in the open sea after a rescue operation, after, at the end of March, he … Read more →...
    The Rational Ethicist | 3 days ago
  • Thoughts on investing for transformative AI
    TLDR: I basically don’t. Contents: Ethical concerns; Thoughts on how to avoid becoming corrupted; Future worlds; What happens in the lead-up to ASI?; Predictions are hard, especially about markets; Trend-following; The EA portfolio; Leaning my investments in the right direction; Appendix: Some specific predictions; Notes.
    Philosophical Multicore | 3 days ago
  • What I learned from making a fire
    One time my friends and I made a fire on the beach.
    Hauke’s Blog | 3 days ago
  • Meta humanoid robots 🤖, SpaceX costs leak 💰, open design 🧑‍🎨
    TLDR AI | 3 days ago
  • Dairy cows make their misery expensive (but their calves can’t)
    How much do cows suffer in the production of milk? I can’t answer that; understanding animal experience is hard. But I can at least provide some facts about the conditions dairy cows live in, which might be useful to you in making your own assessment. My biggest conclusion is that cows made better choices than chickens by making their misery financially costly to farmers. Life Cycle.
    LessWrong | 4 days ago
  • Exploration Hacking: Can LLMs Learn to Resist RL Training?
    We empirically investigate exploration hacking (EH) — where models strategically alter their exploration to resist RL training — by creating model organisms that resist capability elicitation, evaluating countermeasures, and auditing frontier models for their propensity.
    AI Alignment Forum | 4 days ago
  • Explicit Racial Discrimination in College Debate
    Plus other madness
    Bentham's Newsletter | 4 days ago
  • Word-learner
    Words, words were truly alive on the tongue, in the head
    Atoms vs Bits | 4 days ago
  • Are the last 3 months the start of an AI acceleration?
    Most public commentary is debating whether AI has hit a plateau.
    Benjamin Todd | 4 days ago
  • The better algorithms of our nature
    Engagement, bridging, and the design of digital platforms which don't pander to our weaknesses.
    Reasonable People | 4 days ago
  • AI, Fiction, Literature: A Scenario
    Soon, if not already, established authors of mass-market fiction will publish AI-assisted writing.
    Raising Dust | 4 days ago
  • What’s more likely to be sentient: an ant or ChatGPT?
    Sentience is hot these days. Partly because of the development of impressive new AI systems, everyone seems to be asking: How do we know if something is sentient? While consciousness means simply having a subjective point of view on the world — a feeling of what it’s like to be you — sentience is the […]...
    Future Perfect | 4 days ago
  • Measuring the ability of Opus 4.5 to fool narrow classifiers
    We measure the ability of Opus 4.5 to fool prompted or fine-tuned classifiers trying to detect a narrow set of outcomes. We find that: The Opus 4.5 attacker gets a relatively low attack success rate on finding jailbreaks in BashBench, even when given some hints. Performance is especially low against a prompted Opus 4.5 classifier with a CoT and a fine-tuned Haiku 4.5 classifier.
    LessWrong | 4 days ago
  • Notes on equanimity from the inside
    I've always thought of myself as even-keeled and equanimous; that my mind is still. In hindsight, I had no idea what I was talking about. Halfway through my second ten-day meditation retreat, I experienced a depth of equanimity that broke my existing frame of reference. It’s hard to convey in words.
    Effective Altruism Forum | 4 days ago
  • A new rationalist self-improvement book: the 12 Levers
    I'm publishing a book that I think can fairly be described as a rationalist approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach.
    LessWrong | 4 days ago
  • OpenAI's red line for AI self-improvement is fundamentally flawed
    TL;DR. OpenAI's "Critical" threshold for AI self-improvement in the Preparedness Framework v2 has three structural problems: It fires too late. The lagging indicator, 5× generational acceleration sustained for several months, lets ~3 years of effective progress accumulate before triggering. Anthropic used a 2× threshold instead of a 5×. It's self-certified.
    LessWrong | 4 days ago
  • A new rationalist self-improvement book: the 12 Levers
    I'm publishing a book that I think can fairly be described as a rationalist, evidence-oriented approach to self-improvement. Whereas many self-help books focus mainly on stories and what worked well for the author, our book takes a very different approach.
    Effective Altruism Forum | 4 days ago



ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.