Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • AI Security Initiative Seeking Fall 2025 Graduate Student Researcher
    Overview: The UC Berkeley Center for Long-Term Cybersecurity (CLTC) invites applications for a Graduate Student Researcher (GSR) position to work within CLTC’s AI Security Initiative for the Fall…
    Center for Long-Term Cybersecurity | 5 hours ago
  • Podcast Episode 6: Forecasting the Future of Global Health Funding
    In the face of potential major cuts to foreign aid, how can we anticipate the impact on global health and effectively direct resources to the areas of greatest need? In this episode, GiveWell’s CEO and co-founder, Elie Hassenfeld, speaks with Principal Researcher Alex Cohen to discuss the forecasting work GiveWell has undertaken to better understand what the future of global health funding...
    GiveWell | 8 hours ago
  • The need to relativise in debate
    Summary: This post highlights the need for results in AI safety, such as debate or scalable oversight, to 'relativise', i.e. for the result to hold even when all parties are given access to a black box 'oracle' (the oracle might be a powerful problem solver, a random function, or a model of arbitrary human preferences).
    AI Alignment Forum | 8 hours ago
  • Who Are the Least Accountable Actors in the NGO World?
    and what we can do about it
    Measured Life | 10 hours ago
  • Is it even possible to convince people to stop eating meat?
    Factory farming is a particularly wicked problem to solve. It’s a moral atrocity, involving the confinement and slaughter of hundreds of billions of animals globally each year. It’s a blight on the environment. It’s terrible for slaughterhouse workers, many of whom suffer from PTSD, anxiety, or depression. Yet factory farming produces something almost everyone wants […]...
    Future Perfect | 12 hours ago
  • Godfather of Synthetic Bio on De-Aging, De-Extinction, & Weaponized Mirror Life — George Church
    The man behind the last few decades of biotech breakthroughs
    The Lunar Society | 12 hours ago
  • Dead Kids Matter More Than Navigating Bureaucratic Hurdles Concerning Gay Flags
    Recounting a frustrating anecdote (Now unpaywalled)
    Bentham's Newsletter | 13 hours ago
  • Africa Unites to Take Stock of Disease Burden and Financial Needs towards NTDs Elimination by 2030
    Addis Ababa, Ethiopia | June 26, 2025 — Fifty African Union Member States have endorsed a ground-breaking digital micro-planning portal co-created by Africa CDC to accelerate the elimination of Neglected Tropical Diseases — a diverse group of infectious diseases that primarily affect impoverished communities in tropical and subtropical areas.
    The END Fund | 13 hours ago
  • Mapping Global Animal Advocacy Spending
    Drawing on survey data from more than 200 animal advocacy organizations worldwide, this report maps how these groups are spending to transform the global food system.
    Faunalytics | 13 hours ago
  • The EU AI Act Newsletter #80: Commission Seeks Experts for AI Scientific Panel
    The European Commission is establishing a scientific panel of independent experts to aid in implementing and enforcing the AI Act.
    The EU AI Act Newsletter | 14 hours ago
  • Bertrand Russell: What I Believe (1925)
    My notes on a short book in which Russell presents his worldview and employs some fine phrases and a peculiar metaethics.
    Philosophy and Ideas | 15 hours ago
  • Response to Revisions 2 & 3 of the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance
    After further rounds of consultations with member states, the co-facilitators from Spain and Costa Rica have published Revision 3 for the…
    Simon Institute for Longterm Governance | 16 hours ago
  • Engaging the community on Bubeke Island
    From May 26th to 30th, 2025, the Target Malaria Uganda Stakeholder Engagement team at Uganda Virus Research Institute conducted a week-long engagement on Bubeke Island on Lake Victoria in Kalangala district. This field activity attracted community leaders, opinion leaders, sub-county officials, youth, women’s groups, and health workers to share updates, clarify concerns, and enhance project […].
    Target Malaria | 19 hours ago
  • The first non-opioid painkiller
    Journavx was approved this year. Why did it take so long to develop?
    The Works in Progress Newsletter | 19 hours ago
  • How I Found Hope in Effective Giving
    I used to feel powerless against the world’s problems. Then I discovered effective giving — and realised even a small donation can have a huge impact. Now I give knowing it works.
    Giving What We Can | 20 hours ago
  • Missing Heritability: Much More Than You Wanted To Know
    Astral Codex Ten | 20 hours ago
  • The Practical Value of Flawed Models: A Response to titotal’s AI 2027 Critique
    Crossposted from my Substack. @titotal recently posted an in-depth critique of AI 2027. I'm a fan of his work, and this post was, as expected, phenomenal*. Much of the critique targets the unjustified weirdness of the superexponential time horizon growth curve that underpins the AI 2027 forecast.
    Effective Altruism Forum | 1 days ago
  • What we achieved together — $80M donated in 2024 + 6x giving multiplier
    Giving What We Can | 1 days ago
  • Is it possible to fight malaria in flooded regions?
    Ayuda Efectiva | 1 days ago
  • Is it possible to fight malaria in flooded regions? | New extreme poverty threshold: 70 euros a month
    Ayuda Efectiva | 1 days ago
  • Claude apps 🤖, Gemini CLI 👨‍💻, DeepMind tackles Navier Stokes 🌊
    TLDR AI | 1 days ago
  • Reducing suffering given long-term cluelessness
    An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to...
    Magnus Vinding | 1 days ago
  • Summary of John Halstead's Book-Length Report on Existential Risks From Climate Change
    1 Introduction. (Crossposted from my blog--formatting is better there). Very large numbers of people seem to think that climate change is likely to end the world. Biden and Harris both called it an existential threat.
    Effective Altruism Forum | 1 days ago
  • Defining Corrigible and Useful Goals
    This post contains similar content to a forthcoming paper, in a framing more directly addressed to readers already interested in and informed about alignment. I include some less formal thoughts, and cut some technical details.
    AI Alignment Forum | 1 days ago
  • New Paper: Ambiguous Online Learning
    Abstract: We propose a new variant of online learning that we call "ambiguous online learning". In this setting, the learner is allowed to produce multiple predicted labels. Such an "ambiguous prediction" is considered correct when at least one of the labels is correct, and none of the labels are "predictably wrong".
    AI Alignment Forum | 1 days ago
  • Deadline time for cage-free commitments, skills for impactful grantmaking, join AI–animal welfare contractor pool
    Your farmed animal advocacy update for late June 2025
    Hive | 1 days ago
  • How we get to an abundant future
    A perspective from Eli Dourado, Astera’s Head of Strategic Investments...
    Human Readable | 1 days ago
  • A History of Global Development
    Summary: I've recently completed a 7 week series on global development, I thought it would be useful to share each week on the forum as well in case people missed the announcement post. The aim is to give people a more comprehensive overview of the global development landscape, either for those considering working/donating in this area, or if you are already in development but want to...
    Effective Altruism Forum | 1 days ago
  • Cross-Movement Collaboration For Farmed Animal Advocates In Southeast Asia
    The goals of animal advocacy organizations have the potential to benefit people and the environment, leading many to believe that increased cooperation between social movements may increase their impact. This study explores social movements in Southeast Asia, offering insight to help advocates there make collaboration easier and more effective.
    Faunalytics | 2 days ago
  • What The Most Detailed Report Ever Compiled On Existential Risks From Climate Change Found
    Climate change is not an existential risk!
    Bentham's Newsletter | 2 days ago
  • 37 things people love about Sasha Chapin
    A communal love letter to my husband
    Useful Fictions | 2 days ago
  • The Scale Of The Global Trade In Live Farmed Animals
    Over 1.5 billion farmed animals are traded live each year. This study maps trade routes and highlights the major risks to animals, people, and the planet.
    Faunalytics | 2 days ago
  • Why we ignore the suffering of wild animals: Understanding our biases
    A hidden crisis. Literally, quintillions of animals are suffering and dying right now in the wild, due to disease, hunger, thirst, excessive heat or cold, and other factors. Yet, most people—including those who express concern for animals—fail to give importance to this issue. Why?
    Effective Altruism Forum | 2 days ago
  • ATVBT Approves of Approval Voting
    you get approval, and you get approval
    Atoms vs Bits | 2 days ago
  • July 2 deadline: Influence US tech policy | Job Blast 🚀
    TechCongress Fellowship: $73.5K stipend, no gov experience required, apply by July 2
    EACN Newsletter | 2 days ago
  • EA Forum Digest #246
    Hello! Reputable substacker Bentham’s Bulldog has set up a debate event on the Forum, discussing whether morality is objective. Currently the vote is split — maybe you can sway them. Also, CEA is hiring for a director of EA Funds, deadline June 29th. — Toby (for the Forum team). We recommend:
    EA Forum Digest | 2 days ago
  • Preparing for the intelligence explosion | Will MacAskill | EAG London: 2025
    Will MacAskill discusses his recent work on preparing for the intelligence explosion and on focusing on trying to reach near-best futures rather than (merely) trying to prevent catastrophe. Will MacAskill is a Senior Research Fellow at Forethought.
    Centre for Effective Altruism | 2 days ago
  • The data that shapes global health | Saloni Dattani | EAG London: 2025
    Saloni Dattani makes the case that data collection isn’t just an academic concern; it’s crucial for policymaking, journalism, and industry, and, in some cases, a matter of life or death. Data collection has changed our understanding of diseases like cholera, tuberculosis and snakebite envenoming; and has been crucial in tracking progress over time and identifying emerging problems.
    Centre for Effective Altruism | 2 days ago
  • Forecasting can get easier over longer timeframes | Toby Ord | EAG London: 2025
    Most people believe forecasting gets systematically harder over longer time periods — that it is hard to see further through the mists of time. But there are many forecasting questions for which the reverse is true. In this talk, Toby Ord shows how this can be, teasing out a variety of different mechanisms which can make prediction about the further future easier.
    Centre for Effective Altruism | 2 days ago
  • Reasons for optimism in international development | Rachel Glennerster | EAG London: 2025
    Development economist Rachel Glennerster discusses reasons for optimism in global development despite recent aid cuts and declining international cooperation. She explores how innovation drives human progress and examines advanced market commitments as tools for promoting innovation where social and private benefits are misaligned.
    Centre for Effective Altruism | 2 days ago
  • Transitioning your CEO without damaging your organization | Joey Savoie | EAG London: 2025
    Leadership transitions are pivotal but inevitable moments in an organization's life cycle that can either strengthen or destabilize it—and how it will go is often decided by actions taken months, if not years in advance.
    Centre for Effective Altruism | 2 days ago
  • Innovation, economics, and effective altruism | Michael Kremer | EAG London: 2025
    Innovation is the key driver of growth and increases in human wellbeing. Investing in innovation can be extremely cost-effective, because the costs of developing and testing new innovations are relatively low, but they can have huge impacts if scaled up by governments or firms to reach millions of people.
    Centre for Effective Altruism | 2 days ago
  • ChatGPT and OCD are a dangerous combo
    Millions of people use ChatGPT for help with daily tasks, but for a subset of users, a chatbot can be more of a hindrance than a help. Some people with obsessive compulsive disorder (OCD) are finding this out the hard way. On online forums and in their therapists’ offices, they report turning to ChatGPT with […]...
    Future Perfect | 2 days ago
  • Useful prompts for getting music recommendations from AI models
    Today’s best AI “large language models” are especially good at tasks where hallucinations and other failures are low-stakes, such as making music recommendations. Here is a continuously-updated doc of music search prompts I’ve found helpful, a few examples below: Which compositions by [composer] are most popular and/or highly regarded today? Please list 20 of their compositions […]...
    Luke Muehlhauser | 2 days ago
  • Why we ignore the suffering of wild animals: Understanding our biases
    A hidden crisis. Literally, quintillions of animals are suffering and dying right now in the wild, due to disease, hunger, thirst, excessive heat or cold, and other factors. Yet, most people—including those who express concern for animals—fail to give importance to this issue. Why?
    Animal Advocacy Forum | 2 days ago
  • The Moral Gadfly's Double-Bind
    Warranted moral criticism is rarely welcomed
    Good Thoughts | 2 days ago
  • Acknowledged but not engaged: people with intellectual disabilities continue to be left behind
    Decision-making and politics are becoming more inclusive of people with disabilities. However, people with intellectual disabilities still face barriers.
    Sightsavers | 2 days ago
  • 2024 Annual Report | The Future of Leadership
    The END Fund | 2 days ago
  • Wakiso District Supervisors Undergo Refresher Training on eCHIS and DHIS2
    Living Goods | 2 days ago
  • The Need to Optimize EA Content to Show Up in AI Chats
    As more people turn to AI assistants like ChatGPT and Perplexity to answer their questions, an opportunity to go beyond traditional communication efforts like SEO arises. Answer Engine Optimization (AEO) refers to the practice of structuring content so it can be accurately retrieved and cited by large language models (LLMs).
    Effective Altruism Forum | 2 days ago
  • Ave Imperator, morituri te salutant: a review of Skibidi Toilet
    Art has died and been reborn a thousand times now. Join me at its graveside once again. Let us speak a few words for what once was. Let us imagine the inconceivable and hollow future ahead without it. If you weep, I will pass you my handkerchief. And let us all pretend to be surprised once more when it bursts out of its coffin, on fire, and singing.
    Eukaryote Writes Blog | 2 days ago
  • Amazon + Anthropic datacenter ⚡, OpenAI's Office competitor 📝, AI tops HackerOne 👨‍💻
    TLDR AI | 2 days ago
  • New extreme poverty threshold: 800 million people live on less than 70 euros a month
    Ayuda Efectiva | 2 days ago
  • California YIMBY Statement on AB 609: CEQA infill housing exemption
    California YIMBY issued the following statement on news that AB 609, creating CEQA exemptions for infill housing, would be signed into law as part of the state budget (through AB 130, a trailer bill): “This is one of the biggest…
    California YIMBY | 2 days ago
  • Machines of Faithful Obedience
    [Crossposted on LessWrong] Throughout history, technological and scientific advances have had both good and ill effects, but their overall impact has been overwhelmingly positive. Thanks to scientific progress, most people on earth live longer, healthier, and better than they did centuries or even decades ago.
    Windows On Theory | 2 days ago
  • Manifold, the "NYC of Social Prediction Markets"
    Zohran vs Cuomo, Khamenei vs Netanyahu, Haliburton vs Gilgeous-Alexander
    Manifold Markets | 2 days ago
  • Systems Change 101
    In alphabetical order: Ben Smith, @Karen Singleton, @Lin BL, Rebecca Zanini, @Swan 🔸, @Ulf Graf 🔹. Acknowledgements: Rosanna Zimdahl, Ruben Dieleman, Samuel Hilton and Simon Holm. Preface/TL;DR. We believe systems thinking offers valuable tools for identifying more impactful interventions. This post aims to help you:
    Effective Altruism Forum | 2 days ago
  • Why "training against scheming" is hard
    TLDR: I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than other alignment training (e.g. harmlessness training).
    AI Alignment Forum | 2 days ago
  • What does 10x-ing effective compute get you?
    Once AIs match top humans, what are the returns to further scaling and algorithmic improvement?
    Redwood Research | 2 days ago
  • What can be learned from scary demos? A snitching case study
    This is a hackathon project write-up. This does not represent the views of my employer. Thanks to Tianyi Alex Qiu and Aidan Ewart for helpful discussions. Sometimes, people will search for and find a situation where an LLM would do something bad, and then argue that the situation was natural enough that it is concerning. When is this a valid reason to be concerned?
    AI Alignment Forum | 2 days ago
  • How not to lose your job to AI
    The skills AI will make more valuable (and how to learn them)
    Benjamin Todd | 2 days ago
  • Morality is Objective
    There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a debate about this, and try to draw more people into reading and posting on the forum! Here is my opening volley in the debate, and I encourage others to respond. Unlike a lot of effective altruists and people in my segment of the internet, I am a moral realist.
    Effective Altruism Forum | 2 days ago
  • Reasons AGI might take decades, progress on cage-free eggs, lead exposure and much more.
    Hello! Our favourite links this month include: A blog post from 80,000 Hours, outlining reasons that we may still be waiting decades for AGI.
    Effective Altruism Newsletter | 2 days ago
  • The Effective Altruism Forum Is Having A Debate About Objective Morality
    I wrote the first post in this debate--I'd encourage you all to post replies!
    Bentham's Newsletter | 2 days ago
  • The Inheritance of Dreams
    On Dreaming Freely in the Age of AI
    Long Now Foundation | 3 days ago
  • Yes, You Can Argue About Definitions
    Contra Scott Alexander a little bit
    Bentham's Newsletter | 3 days ago
  • Vegan Challenges Are An Underrated Lever For Diet Change
    As part of conducting research into my Tactics in Practice series (reports designed to understand the impact and potential improvements to common interventions to improve animal welfare), I am often a bit let down by what the research says.
    Effective Altruism Forum | 3 days ago
  • How Informed Are We About Companion Animals’ Needs?
    Taking care of a companion animal is a great responsibility. But do we actually know enough to give them what they need?
    Faunalytics | 3 days ago
  • Toby Ord on graphs AI companies would prefer you didn’t (fully) understand
    80,000 Hours | 3 days ago
  • Senior Financial Accountant
    Vacancy SYS-1278 | Location: Nairobi, Kenya | Contract type: Permanent | Duration: Indefinite | Closing date: Sat, 06/28/2025, 12:00 | Department: Finance | Apply: https://jobs.gainhealth.org/vacancies/1278/apply/. The Global Alliance for Improved Nutrition (GAIN) is seeking a Senior Financial...
    Global Alliance for Improved Nutrition | 3 days ago
  • Catastrophic AI misuse
    80,000 Hours | 3 days ago
  • Facing a Changing Industry, AI Activists Rethink Their Strategy
    Amba Kak, co-executive director of AI Now and another coauthor of the report, says that her organization “has been quite focused” on government policy as a way to enact change, but adds that it’s become clear those levers will be unsuccessful unless power is built from below. “We need to make sure that AI is […].
    AI Now Institute | 3 days ago
  • Nuclear Risk and AI | Alice Saltini | EAGxNordics 2025
    In an environment where global tensions surge, geopolitical rivalries intensify, and debates over expanding arsenals become ever more heated, the risk emerges that instability—fueled by misperceptions, misinterpretations and communication breakdowns—could inadvertently propel us toward a nuclear confrontation.
    Centre for Effective Altruism | 3 days ago
  • Try o3-pro in ChatGPT for $1
    Is AI a bubble?
    Hauke’s Blog | 3 days ago
  • Finance and Administration Officer
    Vacancy SYS-1283 | Location: Kampala, Uganda | Contract type: Fixed Term | Duration: Other | Closing date: Fri, 07/04/2025, 12:00 | Department: Finance | Apply: https://jobs.gainhealth.org/vacancies/1283/apply/. The Global Alliance for Improved Nutrition (GAIN) is seeking a Finance and...
    Global Alliance for Improved Nutrition | 3 days ago
  • Raymond Laflamme (1960-2025)
    Even with everything happening in the Middle East right now, even with (relatedly) everything happening in my own family (my wife and son sheltering in Tel Aviv as Iranian missiles rained down), even with all the rather ill-timed travel I’ve found myself doing as these events unfolded (Ecuador and the Galapagos and now STOC’2025 in […]...
    Shtetl-Optimized | 3 days ago
  • DMT for Cluster Headaches: Aborting and Preventing Extreme Pain with Tryptamines and Other Methods
    Announcement: Do you have experience using psychedelics to treat cluster headaches? Want to support science and advocacy in this area? Submit your personal and/or professional testimonial to our upcoming "ClusterFree" Open Letter initiative: https://docs.google.com/forms/d/e/1FAIpQLScIp0YQ_kH-szLeqqUaiMTRY5MVVTE4BYPkx7EhiZW38i5SVQ/viewform?usp=dialog --- In this interview conducted at...
    Qualia Research Institute | 3 days ago
  • Something’s wrong at Global Action to End Smoking
    I’ve written about foundations and nonprofits for more than a decade, yet I continue to be surprised and distressed by the indifference, incompetence and arrogance — and especially the lack of accountability — of some foundation boards. Scandal or mismanagement by senior executives of a foundation almost always reflects a failing board.
    Marc Gunther | 3 days ago
  • Zuckerberg AI recruiting 🤖, OpenAI device details 🎧, Python + Mojo 👨‍💻
    TLDR AI | 3 days ago
  • The Lies of Big Bug
    Crosspost of this blog article. The majority of farmed animals killed each year are insects, and this number is only expected to increase. By 2033, it’s estimated that around 5 trillion insects will be slaughtered annually—more than 50 times the number of cows, pigs, chickens, turkeys, and the like currently slaughtered.
    Effective Altruism Forum | 3 days ago
  • Just take the midpoint?
    An intuitive response to imprecise probability, and its limitations
    Jesse’s Substack | 3 days ago
  • Recent progress on the science of evaluations
    Summary: This post presents new methodological innovations presented in the paper General Scales Unlock AI Evaluation with Explanatory and Predictive Power. The paper introduces a set of general (universal) cognitive abilities that allow us to predict and explain AI system behaviour out of distribution.
    AI Alignment Forum | 3 days ago
  • Foom & Doom 1: “Brain in a box in a basement”
    1.1 Series summary and Table of Contents. This is a two-post series on AI “foom” (this post) and “doom” (next post). A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky.
    AI Alignment Forum | 3 days ago
  • Forecasting AI Forecasting
    TL;DR: This post examines how rapidly AI forecasting ability is improving by analyzing results from ForecastBench, Metaculus AI tournaments, and various prediction platforms. While AI forecasting is steadily improving, there remains significant uncertainty around the timeline for matching top human forecasters—it might be very soon or take several years. ForecastBench.
    AI Alignment Forum | 3 days ago
  • Foom & Doom 2: Technical alignment is hard
    2.1 Summary & Table of contents. This is the second of a two-post series on foom (previous post) and doom (this post). The last post talked about how I expect future AI to be different from present AI.
    AI Alignment Forum | 3 days ago
  • Comparing risk from internally-deployed AI to insider and outsider threats from humans
    I’ve been thinking a lot recently about the relationship between AI control and traditional computer security. Here’s one point that I think is important. My understanding is that there's a big qualitative distinction between two ends of a spectrum of security work that organizations do, that I’ll call “security from outsiders” and “security from insiders”. On the “security from outsiders”...
    AI Alignment Forum | 3 days ago
  • Advancing Data-Driven Policy: How Justice Innovation Lab Uses OSF to Strengthen Randomized Clinical Trial Design and Transparency
    The Justice Innovation Lab (JIL) is a nonprofit organization that works to create a more equitable, effective, and fair criminal justice system by developing and implementing data-informed, community-centered solutions. JIL collaborates with prosecutors, judges, police departments, health agencies, and community organizations to drive meaningful reform and reduce harmful outcomes such as...
    Center for Open Science | 3 days ago
  • Comparing risk from internally-deployed AI to insider and outsider threats from humans
    And why I think insider threat from AI combines the hard parts of both problems.
    Redwood Research | 3 days ago
  • 🟢 US bombs Iran, India again rejects water treaty | Global Risks Weekly Roundup #25/2025.
    Executive summary
    Sentinel | 4 days ago
  • Why I am more optimistic about the animal movement this year
    Sometimes working on animal issues feels like an uphill battle, with alternative protein losing its trendy status with VCs, corporate campaigns hitting blocks in enforcement and veganism being stuck at the same percentage it's been for decades.
    Measured Life | 4 days ago
  • After US strike, Iran faces a desperation threshold
    Iran has no good options, which means bad options may actually play out
    The Power Law | 4 days ago
  • What is marriage for?
    Choosing something you didn't have to choose.
    Otherwise | 4 days ago
  • The Lies of Big Bug
    Insect farming was built on a set of foundational premises. They were all wrong.
    Bentham's Newsletter | 4 days ago
  • What Effective Altruists Believe: An Unmanifesto
    This is a rerun of an old post, now with links!
    Thing of Things | 4 days ago
  • Does Reducing Portion Size Help Cut Meat Consumption?
    Researchers tested portion-size nudges in university dining halls and found that moderate meat reductions could work without sacrificing diner satisfaction, but more dramatic cuts backfired and proved counterproductive.
    Faunalytics | 4 days ago
  • How New Zealand invented inflation targeting
    The political gamble that made modern central banking
    The Works in Progress Newsletter | 4 days ago
  • Substack's Secret
    just one guy's opinion
    Atoms vs Bits | 4 days ago
  • Arkose is Closing
    Summary: Arkose is an early-stage AI safety fieldbuilding nonprofit focused on accelerating the involvement of experienced machine learning professionals in technical AI safety research through direct outreach, one-on-one calls, and public resources. Between December 2023 and June 2025, we had one-on-one calls with 311 such professionals.
    Effective Altruism Forum | 4 days ago
  • Open Thread 387
    Astral Codex Ten | 4 days ago
  • Import AI 417: Russian LLMs; Huawei's DGX rival; and 24 tokens for training AIs
    What happens when AI systems go from passive to proactive?
    Import AI | 4 days ago
