Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • If anyone builds it, everyone will plausibly be fine
    I think AI takeover is plausible. But Eliezer’s argument that it’s more than 98% likely to happen does not stand up to scrutiny, and I’m worried that MIRI’s overconfidence has reduced the credibility of the issue. Here is why I think the core argument in "if anyone builds it, everyone dies" is much weaker than the authors claim. This post was written in a personal capacity.
    AI Alignment Forum | 7 hours ago
  • Why AI Will Have Its Own Unintended Motivations
    "We're already seeing that these AIs have drives that nobody asked for, that nobody wanted." "You're gonna have a world where AIs invent things that are to doing good what Oreos are to eating healthy." "That's the default thing that happens when you sort of grow a mind around a training target."
    Future of Life Institute | 9 hours ago
  • The cheapest way to stop animal cruelty
    According to a new commentary paper in the journal Nature Food, some of the worst animal suffering in the world can be prevented at a rate of just a couple of pennies per hour: the extreme pain experienced by chickens raised for meat. Over the last 75 years, chickens have been bred to grow incredibly […]...
    Future Perfect | 13 hours ago
  • Coming Soon: A New Look for OSF
    On October 4, 2025, the Open Science Framework (OSF) will launch a refreshed design that makes the platform easier to navigate and use. The update modernizes the interface and improves performance, while preserving the familiar workflows that users are accustomed to. All OSF projects, registrations, and preprints will remain accessible after the new interface launches.
    Center for Open Science | 14 hours ago
  • Why the Race to Superintelligence Has No Winners
    "All that's standing between us and this going really poorly is the AIs getting smarter and the AI companies are rushing to make them smarter." "If anyone builds it anywhere, everyone everywhere dies." "This race towards super intelligence, there are no winners and it needs to stop."
    Future of Life Institute | 14 hours ago
  • Every Lance Bush Article
    A parody
    Bentham's Newsletter | 15 hours ago
  • It Never Worked Before: Nine Intellectual Jokes
    A curated collection of intellectual jokes that actually teach interesting ideas and make you think, not just rely on insider-jargon and smart-people stereotypes
    Linch Zhang | 15 hours ago
  • Adolescents Are More Speciesist Than Adults
    Most adults express a deep concern for animals, but also continue to eat them. This “meat paradox” begins to take shape in adolescence, when young people’s values towards farmed and companion animals start to shift.
    Faunalytics | 15 hours ago
  • Member Spotlight: Collaboration & Stewardship Across the Research Lifecycle at VU
    The following blog is based on a recent webinar with Vrije Universiteit Amsterdam. You can watch the full recording here. Vrije Universiteit Amsterdam (VU) is advancing open science by building coordinated support for their research community through an integrated model that connects infrastructure, training, and policy development.
    Center for Open Science | 16 hours ago
  • Hidden Open Thread 399.5
    Astral Codex Ten | 16 hours ago
  • Defining Defending Democracy: Contra The Election Winner Argument
    Astral Codex Ten | 16 hours ago
  • Would democracy survive an AGI-supercharged economy?
    AGI could lead to growth of 30% — and double digit unemployment. It’s not clear that democratic institutions would be able to survive...
    Transformer | 16 hours ago
  • If We Build AI Superintelligence, Do We All Die?
    If you're not at least a little doomy about AI, you're not paying attention
    The Power Law | 17 hours ago
  • Issue 20: The death rays that guard life
    Plus: How to make antibodies, why rivers are now battlefields, and a clinical trial for cooling the planet
    The Works in Progress Newsletter | 17 hours ago
  • Why Building Superintelligence Means Human Extinction (with Nate Soares)
    Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control.
    Future of Life Institute | 18 hours ago
  • Beijing blocks Nvidia sales in vote of confidence for Chinese chipmakers, the White House strikes a deal for 10 percent of Intel, and Commerce voids a $7.4B CHIPS Act grant
    Plus: Google avoids breakup in its antitrust case, OpenAI gets closer to long-awaited governance change, and Anthropic settles its training lawsuit for $3000 per book.
    Policy.ai | 18 hours ago
  • Sightsavers accessibility pack nominated for Zero Project award
    The accessibility toolkit, which helps people to create user-friendly communications, has been shortlisted for a global disability inclusion award.
    Sightsavers | 21 hours ago
  • My new book — Clearing the Air — is published in the UK today
    A Hopeful Guide to Solving Climate Change — in 50 Questions and Answers...
    Sustainability by Numbers | 1 day ago
  • Meta's next-gen glasses 👓, China bans Nvidia 🚫, UUIDv47 👨‍💻
    TLDR AI | 1 day ago
  • More Was Possible: A Review of If Anyone Builds It, Everyone Dies
    Asterisk | 1 day ago
  • Stress Testing Deliberative Alignment for Anti-Scheming Training
    Twitter | Microsite | Apollo Blog | OpenAI Blog | Full paper. Before we observe scheming, where models covertly pursue long-term misaligned goals, models might inconsistently engage in various covert behaviors such as lying, sabotage, or sandbagging.
    AI Alignment Forum | 1 day ago
  • What training data should developers filter to reduce risk from misaligned AI? An initial narrow proposal
    One potentially powerful way to change the properties of AI models is to change their training data. For example, Anthropic has explored filtering training data to mitigate bio misuse risk. What data, if any, should be filtered to reduce misalignment risk? In this post, I argue that the highest ROI data to filter is information about safety measures, and strategies for subverting them.
    AI Alignment Forum | 1 day ago
  • Proof Section to Crisp Supra-Decision Processes
    This post accompanies Crisp Supra-Decision Processes and contains the proof of the following proposition. Proposition 1 [Alexander Appel (@Diffractor), Vanessa Kosoy (@Vanessa Kosoy)]: Let M=(S,s0,A,O,T,B,L,γ) be a crisp supra-MDP with geometric time discount such that S and A are finite. Then there exists a stationary optimal policy. Proof: We first recall some notation.
    AI Alignment Forum | 1 day ago
  • Crisp Supra-Decision Processes
    Introduction. In this post, we describe a generalization of Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) called crisp supra-MDPs and supra-POMDPs. The new feature of these decision processes is that the stochastic transition dynamics are multivalued, i.e. specified by credal sets.
    AI Alignment Forum | 1 day ago
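    The two items above describe the same formalism; as a rough, hedged sketch (notation assumed for illustration, not copied from the posts), a crisp supra-MDP is an MDP whose transition map returns a credal set, i.e. a closed convex set of distributions over next states, and the proof post establishes that a stationary optimal policy still exists in the finite, discounted case.

```latex
% Sketch only: the symbol \mathcal{C} for "the credal sets over" is an assumption.
% A crisp supra-MDP, as summarised above, is a tuple
\[
  M = (S, s_0, A, O, T, B, L, \gamma),
\]
% whose transition dynamics are multivalued: each state-action pair maps to a
% credal set (a closed convex set of probability distributions) over states,
\[
  T : S \times A \to \mathcal{C}\bigl(\Delta(S)\bigr).
\]
% The accompanying proof post states: for finite S and A with geometric time
% discount \gamma, there exists a stationary optimal policy \pi^{*} : S \to A.
```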
  • FTX, Golden Geese, and The Widow’s Mite
    From 2019 to 2022, the cryptocurrency exchange FTX stole 8-10 billion dollars from customers. In summer 2022, FTX’s charitable arm gave me two grants totaling $33,000. By the time the theft was revealed in November 2022, I’d spent all but 20% of it. The remaining money isn’t mine, and I don’t want it.
    Aceso Under Glass | 1 day ago
  • The big idea: how can we live ethically in a world in crisis?
    This article uses the analogy of a chaotic conflict zone to illustrate the pervasive nature of global suffering and the moral obligation to address it effectively. Everyday life offers numerous opportunities to alleviate suffering, analogous to encountering wounded people, imminent threats and distant crises in a war zone.
    Altruismo Eficaz | 1 day ago
  • Statement from Mercy For Animals on H5N1 avian flu outbreak in Nebraska
    H5N1 avian influenza has been confirmed in a Nebraska dairy herd—marking the state’s first case in cattle and underscoring the virus’s continuing spread beyond poultry. Avian influenza continues to circulate among wild birds, cattle, and other mammals, making outbreaks like this persistent and increasingly threatening to animal welfare, food security, and public health. The following […].
    Mercy for Animals | 2 days ago
  • AI-enhanced decision making
    80,000 Hours | 2 days ago
  • AI rules in California, finance to shrimp, and more
    Hello! Our favourite links this month include: Lewis Bollard’s TED talk on why you should speak up about farmed animal welfare. An update on the progress of AI regulation in California.
    Effective Altruism Newsletter | 2 days ago
  • What training data should developers filter to reduce risk from misaligned AI?
    An initial narrow proposal
    Redwood Research | 2 days ago
  • SFF 2025 funding by cause area: $34 million to AI (86%), bio (7%), etc.
    Summary: this year the Survival and Flourishing Fund allocated $34.33 million to organizations working to secure humanity’s future, with the vast majority going to AI (~$29MM), followed by biosecurity (~$2.5MM), and the rest going to various other causes as diverse as fertility, longevity, forecasting, memetics, math research, EA community building, and non-AI/bio global catastrophic risk...
    Effective Altruism Forum | 2 days ago
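    As a quick arithmetic check on the headline's cause-area shares, here is a minimal Python sketch using only the approximate dollar figures quoted in the summary (total $34.33M, AI ~$29M, bio ~$2.5M); because the AI figure is rounded, its computed share lands slightly under the 86% given in the title.

```python
# Approximate figures as quoted in the summary above; the "~" rounding in the
# source carries through to these numbers.
total = 34.33  # $ millions allocated by SFF this year
ai = 29.0      # ~$29MM to AI
bio = 2.5      # ~$2.5MM to biosecurity
other = total - ai - bio  # remainder (fertility, forecasting, etc.)

for name, amount in [("AI", ai), ("bio", bio), ("other", other)]:
    print(f"{name}: ${amount:.2f}M ({amount / total:.0%} of total)")
# AI: $29.00M (84% of total)
# bio: $2.50M (7% of total)
# other: $2.83M (8% of total)
```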
  • Pioneering AI-Powered Weather Forecasting for 38 Million Indian Farmers
    Precision Development | 2 days ago
  • New Conversations with Pablos Holman, Fin Moorhouse, Andrew White, & Sam Arbesman
    Ideas worth building the future around.
    Foresight Institute | 2 days ago
  • The Uncomfortable Truth About Political Assassinations
    What cannot be said
    Bentham's Newsletter | 2 days ago
  • An Order-of-Magnitude Increase in Outreach Resulted in a Smaller Club Intro Meeting
    In Fall 2023, EA Purdue barely existed. In Fall 2024, EA Purdue had 35 people attend its intro meeting. In Fall 2025, EA Purdue had 33 new members attend its intro meeting despite doing dramatically more outreach. Below, the outreach methods and scale are specified, followed by a discussion of the implications for university group organizers and community builders, and a request for help.
    Effective Altruism Forum | 2 days ago
  • Tactics In Practice: The Science Of Making And Keeping Veg*ns
    In this deep dive, we use research and visual guides to walk you through the process of encouraging new people to go veg, and strategies for helping them stay veg.
    Faunalytics | 2 days ago
  • Common Veterinary Drugs Pose Hidden Threat To Garden Birds
    Insecticides previously banned in farming have been found in the nests of blue tits and great tits from a surprising source: companion animal fur.
    Faunalytics | 2 days ago
  • Scaling Altruism 2.0: Seeking Local Contacts for 2025 Fall Cohort
    TL;DR: After our spring pilot with 4 local communities, 35 participants, and valuable learnings, we're launching an improved version of Scaling Altruism for Fall 2025. We're seeking representatives of local communities who want to run introductory courses with reduced operational burden through centralized coordination. What is Scaling Altruism?
    Effective Altruism Forum | 2 days ago
  • Subscribe to the print edition of Works in Progress
    A magazine worthy of our readers
    The Works in Progress Newsletter | 2 days ago
  • A Guaranteed Income Won’t Stop People From Wanting to Work
    Economists worry that a ‘universal basic income’ would make recipients lazier. Data from programs around the world suggests the opposite is true.
    J-PAL | 2 days ago
  • Can open-weight models ever be safe?
    Opinion: Bengüsu Özcan, Alex Petropoulos and Max Reddel argue that technical safeguards, societal preparedness, and new standards could make open-weight models safer...
    Transformer | 2 days ago
  • EA Forum Digest #258
    Hello! Joey Savoie, co-founder and CEO of Ambitious Impact (formerly Charity Entrepreneurship), will be answering questions in his AMA this Saturday. Get your questions in now. Also, applications for EA Connect 2025 — CEA’s virtual conference offering — are open.
    EA Forum Digest | 2 days ago
  • “AI will kill everyone” is not an argument. It’s a worldview.
    You’ve probably seen this one before: first it looks like a rabbit. You’re totally sure: yes, that’s a rabbit! But then — wait, no — it’s a duck. Definitely, absolutely a duck. A few seconds later, it’s flipped again, and all you can see is rabbit. The feeling of looking at that classic optical illusion is […]...
    Future Perfect | 2 days ago
  • The great mosquito resurgence
    At Vox’s climate desk, we’ve spent the past few months digging into a threat that’s easy to swat away in the moment — but increasingly harder to escape: the rise of mosquito and other vector-borne diseases in the United States. Most of us think of mosquitoes as little more than a summer nuisance. But climate […]...
    Future Perfect | 2 days ago
  • Inside Texas’s grand laboratory of dangerous mosquitoes
    Austin, Texas — Under a microscope, a mosquito can look stunning. Their blue-green iridescent scales, purple bands, and attractive spotted wings shimmer — dazzling enough to forget, for a moment, the insect lives to take a sip of your blood. Mosquitoes range in size, from smaller than your pinky fingernail to a commanding presence in […]...
    Future Perfect | 2 days ago
  • [Article] AI Tools for Existential Security
    This is a narration of ‘AI Tools for Existential Security’ by Lizka Vaintrob and Owen Cotton-Barratt; published 14th March 2025. Narration by Perrin Walker (@perrinjwalker).
    ForeCast | 2 days ago
  • Positive Wild Animal Welfare
    This is a linkpost for Positive Wild Animal Welfare by Heather Browning and Walter Veit, which was originally published on Biology & Philosophy on 12 March 2023. The abstract and last paragraph of the introduction are below. I am still guessing soil animals have negative lives, but I have been very uncertain. Abstract.
    Effective Altruism Forum | 2 days ago
  • I enjoyed most of IABIED
    I listened to "If Anyone Builds It, Everyone Dies" today. I think the first two parts of the book are the best available explanation of the basic case for AI misalignment risk for a general audience. I thought the last part was pretty bad, and probably recommend skipping it.
    Effective Altruism Forum | 2 days ago
  • Global Fund Results Report 2025
    The 2025 Results Report captures a pivotal moment in the Global Fund partnership’s fight against HIV, tuberculosis (TB) and malaria. After decades of progress, global health is in crisis. Declining international funding is jeopardizing the fight against AIDS, TB and malaria – and with it, global health security. Since Global Fund’s inception in 2002, health […].
    Target Malaria | 2 days ago
  • What's going on in video in AI Safety these days? (A list)
    My write-up as of September 2025 - feel free to ask me for a more updated Source of Truth if you're interested in this space - and please let me know what I’m missing! Key players making video: Palisade Research does research on AI for policymakers and communicators, and has started its own video program - check out their videos!
    Effective Altruism Forum | 2 days ago
  • Oyin Solebo joins GiveDirectly Board of Directors
    With nearly $1 billion delivered to people in poverty, GiveDirectly is entering a bold new phase: scaling our programs while innovating faster to meet our goal of delivering $5 billion by 2035. At this inflection point, we’re proud to welcome Oyin Solebo to GiveDirectly’s Board of Directors. Solebo brings unmatched experience at the intersection of […]...
    GiveDirectly | 2 days ago
  • Statement from Mercy For Animals scientists on H5N1 avian flu outbreak in South Dakota
    H5N1 avian influenza at a South Dakota turkey operation—resulting in the killing of more than 55,000 birds—has been confirmed. Avian influenza continues to circulate among wild birds, cattle and other mammals, making outbreaks like this both persistent and increasingly threatening to animal welfare, food security and public health. The following statement on South Dakota’s avian […].
    Mercy for Animals | 2 days ago
  • Reaching More Women Than Ever Before
    There are two things we know with absolute certainty. First, more than one million women are needlessly suffering from childbirth injuries such as fistula. Second, with a simple surgery, it is possible to restore a woman’s health and give her a new chance at life. To bring more surgeries to more women around the world, …
    Fistula Foundation | 2 days ago
  • TikTok buyers revealed 📱, Jack Ma returns 👋, measuring AI dev impact 👨‍💻
    TLDR AI | 2 days ago
  • Survey for the first AVA Academy Europe
    We’re excited to share that the first AVA Academy in Europe is in the works! Organized by AVA International, it will take place in Berlin, Germany, in late March 2026. This program will be designed specifically for animal advocates across the region—to help you grow your skills, build powerful networks, and strengthen the future of the movement.
    Animal Advocacy Forum | 2 days ago
  • Rethinking The Impact Of AI Safety Videos: Extending Austin & Marcus' framework
    A few days ago, Austin Chen and Marcus Abramovitch published How cost-effective are AI safety YouTubers?, described as early work on a "GiveWell for AI Safety", ranking different interventions in the AI safety video space, using a framework that measured impact by basically multiplying watchtime by three quality factors (Quality of Audience, Fidelity of Message and Alignment of Message).
    Effective Altruism Forum | 2 days ago
  • The Center for AI Policy Has Shut Down
    And the need for more AIS advocacy work. Executive Summary. The Center for AI Policy (CAIP) is no more. CAIP was an advocacy organization that worked to raise policymakers’ awareness of the catastrophic risks from AI and to promote ambitious legislative solutions.
    Effective Altruism Forum | 2 days ago
  • 100 years of insulin in 15 minutes
    Episode three of Hard Drugs is about how our need to produce insulin kickstarted the modern biotech industry
    The Works in Progress Newsletter | 3 days ago
  • Rarely Noticed Choices That Shape Your Debates
    Key takeaways: Many debates hinge on definitions. Disagreements about God, personhood, love, justice and more can all turn on how key...
    Clearer Thinking | 3 days ago
  • Japan’s Biggest Food Companies Are Falling Behind on Animal Welfare. Here Is Who’s Leading and Who’s Not
    An inaugural report from Mercy For Animals reveals the urgent need for Japanese companies to take action and join the global cage-free movement.
    Mercy for Animals | 3 days ago
  • AI scaling & scientific R&D by 2030
    What will AI look like by 2030 if current trends hold? What does it mean for AI capabilities in scientific R&D?
    Epoch Newsletter | 3 days ago
  • Things You Learn Selling Sex In The Bay Area
    [This is a guest blog post from a writer who wishes to remain anonymous about her experience selling sex in the Bay Area.]
    Thing of Things | 3 days ago
  • A Rule For Social Media Engagement
    Try to avoid adding fuel to the fire for no reason
    Bentham's Newsletter | 3 days ago
  • Hope For Hens: Can Cage-Free Eggs Take Off In China?
    As China rethinks its approach to animal welfare, egg producers are weighing the switch to cage-free systems. This research gives us insight into what’s working, what’s holding them back, and how advocates can help.
    Faunalytics | 3 days ago
  • Book Review: 'If Anyone Builds It, Everyone Dies'
    Eliezer Yudkowsky and Nate Soares’ new book should be an AI wakeup call — shame it’s such a chore to read...
    Transformer | 3 days ago
  • What’s new in biology: September 2025
    Gene therapy, narcolepsy drugs, parasite removal, protein nanoparticles, the 3D structure of genomes, and more.
    The Works in Progress Newsletter | 3 days ago
  • The Electrotech Revolution: Some insights into a new way of thinking about the transition
    From burning old sunshine to using it in real-time.
    Sustainability by Numbers | 3 days ago
  • LLM AGI may reason about its goals and discover misalignments by default
    Epistemic status: These questions seem useful to me, but I'm biased. I'm interested in your thoughts on any portion you read. If our first AGI is based on current LLMs and alignment strategies, is it likely to be adequately aligned? Opinions and intuitions vary widely. As a lens to analyze this question, let's consider such a proto-AGI reasoning about its goals.
    AI Alignment Forum | 3 days ago
  • A recurrent CNN finds maze paths by filling dead-ends
    Work done as part of my work with FAR AI, back in February 2023. It's a small result but I want to get it out of my drafts folder. It was the start of the research that led to interpreting the Sokoban planning RNN. I was trying to study neural networks that plan, in order to have examples of mesa-optimizers. I trained the recurrent maze CNN from Bansal et al.
    AI Alignment Forum | 3 days ago
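    For readers who have not seen this kind of model, here is a minimal, hypothetical PyTorch sketch of the general ingredient the item relies on: a weight-tied convolutional block applied repeatedly to a maze grid, so that running more iterations lets path information propagate further. It illustrates the technique only; it is not the exact architecture from Bansal et al. or the author's trained network, and all names and sizes below are made up.

```python
import torch
import torch.nn as nn


class RecurrentMazeCNN(nn.Module):
    """Weight-tied conv block applied repeatedly to a maze grid.

    Input: (B, 3, H, W) maze encoding (e.g. wall / start / goal planes).
    Output: (B, 1, H, W) per-cell logits for "this cell is on the path".
    """

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encode = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # The same block is reused at every iteration (weight tying), so
        # "thinking longer" just means applying it more times.
        self.block = nn.Sequential(
            nn.Conv2d(channels + 3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.readout = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, maze: torch.Tensor, iters: int = 20) -> torch.Tensor:
        h = torch.relu(self.encode(maze))
        for _ in range(iters):
            # Re-feed the raw maze at every step so the input stays visible
            # however many iterations are run.
            h = self.block(torch.cat([h, maze], dim=1))
        return self.readout(h)


# Tiny usage example: four random 9x9 "mazes", run with extra iterations.
model = RecurrentMazeCNN()
mazes = torch.randint(0, 2, (4, 3, 9, 9)).float()
path_logits = model(mazes, iters=30)
print(path_logits.shape)  # torch.Size([4, 1, 9, 9])
```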
  • We mapped the EA Forum's intellectual landscape - read about epistemic quality analysis, forum vibes, author networks, and a new writing tool we developed
    We scraped all EA forum posts & comments from 2024 and 2025 to use as a test bed for sense making, analysis, and AI-powered epistemic grading as a part of the Future of Life Foundation’s Fellowship on AI for Human Reasoning. (We’re happy to share all code and all datasets, etc. to interested parties. Just reach out!).
    Effective Altruism Forum | 3 days ago
  • [Linkpost] 80,000 Hours review: 2023 to mid-2025
    We’ve released our review of our programmes for the years 2023 to mid-2025. The full review is available on our website, and we’re sharing the summary below. You can find our previous reviews here. Summary: Key updates since 2022:
    Effective Altruism Forum | 3 days ago
  • What will AI look like in 2030?
    This report was commissioned by Google DeepMind. All points of views and conclusions expressed are those of the authors and do not necessarily reflect the position or endorsement of Google DeepMind.
    Epoch Blog | 3 days ago
  • TikTok deal 📱, GPT-5-Codex 🤖, Alzheimer's reversal trial 💊
    TLDR AI | 3 days ago
  • Mercy For Animals releases its inaugural Japan animal welfare report
    New cage-free egg report reveals major opportunity to improve animal welfare in Japan’s food industry TOKYO — On September 15, Mercy For Animals released Animal Welfare Report 2025: Japan, its inaugural animal welfare report for Japan. The first of its kind, this report examines and evaluates cage-free egg initiatives at Japanese food companies, revealing that […].
    Mercy for Animals | 3 days ago
  • The Homework: September 15, 2025
    Welcome to the September 15, 2025 Main edition of The Homework, the official newsletter of California YIMBY — legislative updates, news clips, housing research and analysis, and the latest writings from the California YIMBY team. News from Sacramento: HISTORIC VICTORY…
    California YIMBY | 3 days ago
  • Marginal Persuasion
    Consider which direction of change would be an improvement
    Good Thoughts | 3 days ago
  • 🟢 Russian drones shot down over Poland, Israel strikes Qatar, Trump ally assassinated | Global Risks Weekly Roundup #37/2025.
    OpenAI and Oracle signed a $300B deal, which would lead to Oracle displacing Microsoft to become OpenAI’s largest compute provider; the deal lifted Oracle’s stock price.
    Sentinel | 4 days ago
  • Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)
    80,000 Hours | 4 days ago
  • The EU AI Act Newsletter #86: Concerns Around GPT-5 Compliance
    Concerns have been raised about OpenAI’s compliance with the EU AI Act requirements for its recently released GPT-5 model, particularly regarding the disclosure of training data.
    The EU AI Act Newsletter | 4 days ago
  • The Full Quote Says...
    A thing that bothers me
    Atoms vs Bits | 4 days ago
  • From Fossils to Philosophy: A Deep Dive with Daniel Muñoz on Ethics, Discourse, and What We Owe to Ourselves
    A recording from Linch's live video
    Linch Zhang | 4 days ago
  • Stress Reduction For Rodents In Laboratories: Better, But Not Great
    Researchers conducted analyses of over 200 studies on rats and mice in laboratories and concluded that “enriched” housing leads to reduced stress and improved health.
    Faunalytics | 4 days ago
  • Open Source Ecosystem Project Spotlight: GakuNin RDM
    GakuNin RDM is a research data management (RDM) service provided by the National Institute of Informatics (NII) and designed to promote open science across universities and research institutions in Japan. Developed with sustained support from the Japanese government and funding agencies, the platform builds on the Open Science Framework (OSF)’s architecture, leveraging its features and...
    Center for Open Science | 4 days ago
  • Open Source Ecosystem Project Spotlight: Exporting OSF Projects to PDF
    What if you could generate a detailed PDF snapshot of your entire OSF project—including metadata, wikis, files, contributors, components, and logs? A new project led by Ramiro Bravo from the University of Manchester (UoM) is working to make this feature a reality.
    Center for Open Science | 4 days ago
  • Open Source Ecosystem Project Spotlight: Accessible Content Optimization for Research Needs (ACORN)
    Researchers generate vast amounts of data, but making that information easy to find, understand, and use is still a challenge. Accessible Content Optimization for Research Needs (ACORN) is a command-line multitool designed to streamline and improve the accessibility and usability of research activity data (RAD).
    Center for Open Science | 4 days ago
  • Open Source Ecosystem Project Spotlight: DataPipe
    DataPipe is a tool that enables researchers to save data from a behavioral experiment directly to the Open Science Framework (OSF). Developed by Joshua R. de Leeuw, Associate Professor of Cognitive Science at Vassar College, DataPipe aims to simplify the process of implementing born-open data for behavioral researchers.
    Center for Open Science | 4 days ago
  • Hiring struggles are plaguing the EU AI Office
    Key leadership roles, including a head of the AI Office safety unit, have yet to be hired
    Transformer | 4 days ago
  • The women of Touba
    Here is the story of five women from Touba: Amy, Nar, Ndiatté, Aminata and Ndack. Each of them had advanced trachoma, and they all received sight-saving surgery as part of Senegal’s efforts to eliminate the disease.
    Sightsavers | 4 days ago
  • ChinAI #328: The Cold Reality for Chinese AI Start-ups
    Greetings from a world where…
    ChinAI Newsletter | 4 days ago
  • The Night I Accidentally Stepped on a Snail
    The Incident. It was 11 PM on a rainy August evening when my partner and I were walking back from a coffee shop in London, and I accidentally stepped on a snail. Its shell shattered completely under my weight, although it was still moving.
    Effective Altruism Forum | 4 days ago
  • How market design can feed the poor
    America's largest non-profit had a broken distribution system. University of Chicago economists fixed it.
    The Works in Progress Newsletter | 4 days ago
  • Opinion | Ideas for smarter growth: India’s bet for an inclusive future
    In a new op-ed for The Hindu, Saptarishi Dutta, Sharanya Chandran, and Vijayalakshmi Iyer of J-PAL South Asia argue that India’s path to becoming a high-income country must be rooted in evidence.
    J-PAL | 4 days ago
  • The women of Touba
    This is the story of five women from Touba, Senegal. Each of them had advanced trachoma, and they all received sight-saving operations as part of Senegal’s journey to eliminate the disease.
    Sightsavers | 4 days ago
  • The impacts of AI on animal advocacy
    Introduction. The role of artificial intelligence (AI) in our everyday lives is rapidly accelerating. Take ChatGPT: released less than three years ago, it now has over a billion weekly users, and is already vastly more capable than the original version. The same goes for other popular AI models, like Google’s Gemini or Anthropic’s Claude.
    Effective Altruism Forum | 4 days ago
  • For the first time, more kids are obese than underweight
    Something striking just happened in global nutrition: As of 2025, children worldwide are now more likely to be obese than underweight. According to UNICEF’s new Child Nutrition Report, about 9.4 percent of school-age kids (ages 5–19) are living with obesity, compared to 9.2 percent who are underweight. Twenty-five years ago, the gap was much wider: […]...
    Future Perfect | 4 days ago
  • Indirect effects in longtermism
    I'm presenting a talk at EAG NYC next month on the topic of indirect effects. Not to spoil the talk too much, but the broad theme is that wild animal welfare and longtermism are in similar epistemic positions with regard to (different types of) indirect effects, and that it could be instructive to compare how the two different communities approach uncertainty about these effects.
    Effective Altruism Forum | 4 days ago
  • A self-driving car traffic jam is coming for US cities
    A century ago, a deluge of automobiles swept across the United States, upending city life in its wake. Pedestrian deaths surged. Streetcars, unable to navigate the choking traffic, collapsed. Car owners infuriated residents with their klaxons’ ear-splitting awooogah! Scrambling to accommodate the swarm of motor vehicles, local officials paved over green space, whittled down sidewalks […]...
    Future Perfect | 4 days ago
  • Open Thread 399
    Astral Codex Ten | 4 days ago
  • 80,000 Hours review: 2023 to mid-2025
    80,000 Hours | 4 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Jiwoon Hwang
  • Incremental Updates
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Anisha Zaveri
  • Linch Zhang

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • AI X-risk Research Podcast
  • Alignment Newsletter Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.