Effective Altruism News

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective Altruism for Christians
  • Effective Altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Charles Dillon
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Florian Jehn | Existential Crunch
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Thomas Moynihan
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Stefan Schubert | The Update
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Pancreatic cancer just met its match
    A disease that was once a death sentence is increasingly treatable
    The Works in Progress Newsletter | 39 minutes ago
  • Outrage Grows in Chicago and Atlanta as Kroger Faces Backlash Over Broken Cage-Free Promise
    Local shoppers pressure one of the nation’s largest grocers after it failed to fulfill its 2025 commitment. LOS ANGELES — Kroger promised customers it would go 100% cage-free. Instead, the nation’s number one supermarket chain failed to deliver, leaving millions of hens confined in cages across its supply chain, raising serious concerns about corporate accountability and […].
    Mercy for Animals | 4 hours ago
  • On the Race for California Governor: An Abundance of Pro-Housing Candidates
    For the past decade, the fight to make it legal and feasible to build housing at scale in California felt Sisyphean. California YIMBY and our allies pushed against exclusionary land use policies, and a political class content to blame the…. The post On the Race for California Governor: An Abundance of Pro-Housing Candidates appeared first on California YIMBY.
    California YIMBY | 4 hours ago
  • Why You Can't Use Your Right to Try
    The Availability Problem: Imagine you have cancer, or chronic pain, or a progressive degenerative disease of some sort. You have exhausted the traditional treatment options available to you, and none of them have worked. However, there are treatments that are still undergoing clinical trials which might help you.
    LessWrong | 14 hours ago
  • New York Advances Landmark Legislation to Ban Octopus Factory Farming
    New York lawmakers are advancing legislation that could make the state the first on the East Coast to preemptively ban octopus factory farming, a practice scientists and advocates warn would pose significant animal welfare and environmental concerns. This week, a key Assembly bill advanced out of committee with a favorable vote, marking a major step […].
    Mercy for Animals | 14 hours ago
  • GiveWell Opens RFI for Malaria Pilots and Research
    GiveWell is launching a new request for information (RFI) to expand and strengthen our malaria grantmaking in Africa and help our donors make a greater impact. Expressions of interest can be submitted through one of two tracks, the first for malaria chemoprevention and vector control pilot programs and the second for research and evaluation.
    GiveWell | 14 hours ago
  • How useful is the information you get from working inside an AI company?
    This post was drafted by Buck, and substantially edited by Anders. "I" refers to Buck. Thanks to Alex Mallen for comments. People who work inside AI companies get access to information that I only get later or never. Quantitatively, how big a deal is this access? Here’s an operationalization of this. Consider the following two ways my knowledge could be augmented:
    LessWrong | 15 hours ago
  • Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)
    1.1 Tl;dr. Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to...
    LessWrong | 15 hours ago
  • Empowerment, corrigibility, etc. are simple abstractions (of a messed-up ontology)
    1.1 Tl;dr. Alignment is often conceptualized as AIs helping humans achieve their goals: AIs that increase people’s agency and empowerment; AIs that are helpful, corrigible, and/or obedient; AIs that avoid manipulating people. But that last one—manipulation—points to a challenge for all these desiderata: a human’s goals are themselves under-determined and manipulable, and it’s awfully hard to...
    AI Alignment Forum | 17 hours ago
  • Exporters Without Borders: Why You Should Start a Company Instead of Working in Aid
    This is a crosspost of the full text of Exporters Without Borders: Why You Should Start a Company Instead of Working in Aid from In Development, made for the EA Forum's In Development Highlight Week. If you enjoy the article, you can subscribe to In Development's substack here. June Jambiha was a quintessential hustler.
    Effective Altruism Forum | 17 hours ago
  • Who Got Breasts First and How We Got Them
    It really is Sydney Sweeney’s world, and we’re all just living in it. Human female breasts are an evolutionary mystery along several dimensions. First, breast permanence is unique to humans. All other mammals develop breast prominence during pregnancy or nursing, and the mammary tissue recedes after weaning. This process is called “involution”.
    LessWrong | 17 hours ago
  • Why We Should Build AI Tools, Not AI Replacements (with Anthony Aguirre)
    Anthony Aguirre is the CEO of the Future of Life Institute. He joins the podcast to discuss A Better Path for AI, his essay series on steering AI away from races to replace people. The conversation covers races for attention, attachment, automation, and superintelligence, and how these can concentrate power and undermine human agency.
    Future of Life Institute | 17 hours ago
  • 🟡 US-Iran stalemate continues, Putin says Ukraine war may come to an end, White House considers AI executive order || Global Risks Weekly Roundup #19/2026
    Executive summary
    Sentinel | 17 hours ago
  • Effective Altruism Australia is launching a new podcast - designed for a broad audience
    More Than Good is a new podcast from Effective Altruism Australia, aimed at introducing the ideas and principles of effective altruism to a broader audience. The episodes are framed around moral questions and how people think about doing good, covering topics like global inequality, animal welfare, ethics, philosophy and more. For a global movement, there is relatively little content that is...
    Effective Altruism Forum | 18 hours ago
  • Anthropic’s strange fixation on hyperstition
    In a recent tweet, Anthropic seems to have asserted that hyperstition is responsible for observed misalignment in their AIs. Strangely, the research they use as evidence actually doesn’t seem to be related to hyperstition at all?
    LessWrong | 18 hours ago
  • The Homework: May 11, 2026
    Welcome to the May 11, 2026 Main edition of The Homework, the official newsletter of California YIMBY — legislative updates, news clips, housing research and analysis, and the latest writings from the California YIMBY team. News from Sacramento We’re in…. The post The Homework: May 11, 2026 appeared first on California YIMBY.
    California YIMBY | 19 hours ago
  • I Attended A Lecture by William Lane Craig: Here Were My Problems With It
    On inflating your case
    Bentham's Newsletter | 19 hours ago
  • How useful is the information you get from working inside an AI company?
    My median guess: it's as good as a crystal ball that sees 2.5 months into the future.
    Redwood Research | 19 hours ago
  • Halal’s Animal Welfare Gap: What Muslim Consumers Believe And Know
    A survey of Muslim consumers in Türkiye revealed significant gaps in public awareness around animal welfare in halal practices. However, many demonstrated a willingness to change their behavior when given accurate information. The post Halal’s Animal Welfare Gap: What Muslim Consumers Believe And Know appeared first on Faunalytics.
    Faunalytics | 20 hours ago
  • Bumble Bees Spread String Pulling Through Social Learning
    In this experiment, bumble bees learned to pull strings to access rewards, with behavior spreading within and between colonies. The post Bumble Bees Spread String Pulling Through Social Learning appeared first on Faunalytics.
    Faunalytics | 20 hours ago
  • Introducing the COS Open Scholarship Training for Researchers Series
    The Center for Open Science (COS) is introducing the Open Scholarship Training for Researchers Series, a collection of seven self-paced online courses developed by COS in response to what researchers have told us they actually need. Enrollment is now open for the first two courses, with additional courses launching through Winter 2026.
    Center for Open Science | 21 hours ago
  • Viren Jain | Connectomics and AI @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures. Hosted on Acast. See acast.com/privacy for more information.
    The Foresight Institute Podcast | 21 hours ago
  • Steve Jurvetson | Investing in AI Moonshots @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 21 hours ago
  • Sonia Arrison | Lobbying for Longevity Progress @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 21 hours ago
  • Richard Ngo | Identity & Meaning in SciFi Futures @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 21 hours ago
  • Joshua Elliott | The Hail Mary Phase of Climate Change @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 21 hours ago
  • John Hallman & Rico Meinl | Accelerating Life Sciences @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Jesse Posner | Fiduciary AI: The New Architecture of Freedom @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Haleh Fotowat | Harnessing Biological Intelligence for Building Living Machines with Nervous Systems
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Eli Dourado | Thoughts on Philanthrocapitalism @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Ed Boyden | Technological Path to Whole Brain Simulation @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • David Eagleman | How Might AI Build us Into Better Humans @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Corey Hudson | Catalyzing Generative Protein @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Ariel Ekblaw | Self-Assembling Space Structures @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Ant Rowstron and Ilan Gur Fireside Chat @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Andrew Payne | PRISM Optical Connectomics @ Vision Weekend USA 2025
    This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area. Vision Weekends are our flagship conference series, bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontier science and technology and to imagine paths toward flourishing futures.
    The Foresight Institute Podcast | 22 hours ago
  • Import AI 456: RSI and economic growth; radical optionality for AI regulation; and a neural computer
    What laws does superintelligence demand?
    Import AI | 22 hours ago
  • ChinAI #358: Around the Horn (25th episode)
Greetings from a world where…
    ChinAI Newsletter | 22 hours ago
  • You Didn't Build That
    an oddly specific but brief gripe-post
    Atoms vs Bits | 24 hours ago
  • Hantavirus won't be the next COVID
    A forecaster's breakdown of the Hondius cruise ship outbreak
    The Power Law | 24 hours ago
  • How the AI Labs Make Profit (Maybe, Eventually)
    I wrote this essay as a submission to Dwarkesh Patel’s blog prize, though I have been meaning to write this up for a while. Usually, for a company to become profitable, they need to increase revenue, decrease costs, or some mixture of the two.
LessWrong | 1 day ago
  • Weaponized self-doubt
    The biggest hook they had in me was this fear that I’m dangerously inadequate and *they* somehow held the keys to mitigating that.
Holly Elmore | 1 day ago
  • Open Thread 433
Astral Codex Ten | 1 day ago
  • Writing children, and paying attention
    What I'm reading, May '26, pt.1
Raising Dust | 1 day ago
  • How AI in Context approaches thumbnails
I used an LLM to help draft this post, but I’ve edited/rewritten it extensively and endorse it. AI in Context is a channel about transformative AI and its risks, published by 80,000 Hours. Writing up our current approach to thumbnails, which is nowhere near perfect, for easy shareability and cross-pollination of lessons. Would love to hear what other people are trying!
Effective Altruism Forum | 1 day ago
  • Donation Timing Under Uncertainty About AI Timelines
A few years back, I got a big pile of money from working at a tech startup. I put a lot of that money into a donor-advised fund. Since now I make hardly any money, that DAF might represent the majority of my lifetime donations. How much of my DAF should I donate per year? In particular, how much should I donate in light of short AI timelines? I created a simple model to answer this question.
Philosophical Multicore | 1 day ago
  • The mythical median voter
    Most people have an above average number of legs, and what that means for our political imagination
Reasonable People | 1 day ago
  • Book review: Girl Scout Handbook 1956
    And a review of girl scouting in general. The post Book review: Girl Scout Handbook 1956 appeared first on Otherwise.
Otherwise | 1 day ago
  • 10 big projects for reducing bio x-risk
    Engineered pathogens pose a grave threat to society, plausibly constituting an existential risk (‘x-risk’) to humanity. Yet remarkably few people are working full-time on this problem. By my count, there are ~160 people on the planet whose full-time job is reducing bio x-risk. This entire group could fit on a single short-haul flight.
Effective Altruism Forum | 1 day ago
  • Inside Meta AI rollout 💼 , OpenAI cash outs 💰, code maintenance costs 👨‍💻
TLDR AI | 1 day ago
  • Childhood stunting fell dramatically over the 20th century
    What can countries with high stunting rates today learn from Japan’s experience of going from 70% to 5%?
Our World in Data | 1 day ago
  • The Trevisan Award and the Decimal Digits of Powers of 2
    WHOA … I’ve won the inaugural Luca Trevisan Award for Expository Work in Theoretical Computer Science! This has a particular meaning for me as someone who knew Luca Trevisan as well as I did for 25 years — who had him as a professor and thesis committee member, whose blog bounced off his blog, who […]...
    Shtetl-Optimized | 2 days ago
  • Clarifying the role of the behavioral selection model
    This is a brief elaboration on The behavioral selection model for predicting AI motivations, based on some feedback and thoughts I’ve had since publishing. Written quickly in a personal capacity. The main focus of this post is clarifying the basic machinery of the behavioral selection model, and conveying why it matters to disambiguate between different “motivations” for AI behavior.
    AI Alignment Forum | 2 days ago
  • “Reflections on Anthropic and EA” by abrahamrowe
    LLM disclosure: I wrote this post myself, then asked an LLM to copy-edit it before posting. I manually made any edits I liked and copy-pasted no text from the LLM (my current practice for using LLMs in writing that I care about). This is crossposted from my blog. These are personal reflections on feelings that I’ve been sitting with recently.
    Effective Altruism Forum Podcast | 2 days ago
  • The Darwinian Honeymoon - Why I am not as impressed by human progress as I used to be
Crossposted from Substack and the EA Forum. A common argument for optimism about the future is that living conditions have improved a lot in the past few hundred years, billions of people have been lifted out of poverty, and so on. It’s a very strong, grounding piece of evidence - probably the best we have in figuring out what our foundational beliefs about the world should be.
    LessWrong | 2 days ago
  • Digital minds governance: early scoping from expert interviews
    Notes from 29 interviews with researchers, philosophers, lawyers, and policy experts on what the new field should do.
    Outpaced | 2 days ago
  • Reflections on Anthropic and EA
    Personal reflections I've been sitting on lately.
    Good Structures | 2 days ago
  • Expand Your Moral Circle
    Why every sentient being matters
    Bentham's Newsletter | 2 days ago
  • AI is fast, precarious, self-amplifying, and complicated
    Rational Animations | 2 days ago
  • “I’m disgusted to be a human”: What to do when you hate your own species
    Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from […]...
    Future Perfect | 2 days ago
  • International Law Cannot Prevent Extinction Either
    The context for this post is primarily Only Law Can Prevent Extinction, but after first drafting a half-assed comment, I decided to get off my ass and write a whole-assed post. I agree with Eliezer's main thesis that individual violence against AI researchers is both morally wrong and strategically stupid. Where I disagree is with the claim that international law can prevent extinction.
    LessWrong | 2 days ago
  • Cryonics, pDoom, Simulation, Super AI … 80k-Subscriber FAQ!
    In this episode of the Podcast La Prospective, Gaëtan Selle of The Flares answers questions submitted by subscribers to mark the channel passing 80,000 subscribers on YouTube. ⬇️⬇️⬇️ Additional information: sources, references, links... ⬇️⬇️⬇️
    The Flares | 2 days ago
  • Neural Networks learn Bloom Filters
Overview: We train a tiny ReLU network to output sparse top-k distributions over a vocabulary much larger than its residual dimension. The trained network seems to converge to a mechanism closely resembling a Bloom filter: tokens are assigned sparse binary hashes, the hidden layer computes an approximate union indicator, and the output logits are linearly read from this union.
    LessWrong | 2 days ago
  • The AI Industrial Explosion — Part 2: Transition Dynamics
    How fast could an AI-automated economy actually start growing? Today's economy doesn't produce enough of the stuff that makes stuff. Restructuring takes a few years — but then the second doubling comes in half the time, and the economy is many times its current size within a decade.
    Defenses in Depth | 2 days ago
  • If digital computers are conscious, they are conscious at the hardware level
    Contemporary debate over the moral patienthood of digital minds misses the forest for the trees. Mainstream opinion is divided into physicalist and computationalist camps, who believe that consciousness is substrate dependent and substrate independent, respectively. For this reason, those on the physicalist side frequently make the claim that digital computers will never be conscious.
    LessWrong | 3 days ago
  • 10 big projects for reducing bio x-risk
    The field of people working on reducing bio x-risk is distressingly small. I sketch out 10 big, urgent projects I'd be excited for new people to come work on and own.
    Defenses in Depth | 3 days ago
  • Religious People Shouldn't Deny AI Consciousness
    The common assumption that they should is completely wrong
    Bentham's Newsletter | 3 days ago
  • The Psychology of Authority
    No state has genuine authority.
    Fake Nous | 3 days ago
  • A benchmark is a sensor
The simple mental picture. A simple mental picture we have for an AI capability benchmark is to think of it as a sensor with a certain sensitivity within a certain range of capabilities. The sensitivity of a benchmark, i.e. its ability to distinguish the capability of different models, is given by a curve like this:
    LessWrong | 3 days ago
  • The surprisingly strong case for feeling great about your coffee habit
    There are few news subjects more reliably depressing than nutritional science. A glance at the headlines will tell you that sugar is bad for you, red meat is bad for you, and alcohol is really, really bad for you. The message seems to be that if a food or drink gives you even an iota […]...
    Future Perfect | 3 days ago
  • Bad Problems Don't Stop Being Bad Because Somebody's Wrong About Fault Analysis
Here's a dynamic I’ve seen at least a dozen times: Alice: Man that article has a very inaccurate/misleading/horrifying headline. Bob: Did you know, *actually* article writers don't write their own headlines? … But what I care about is the misleading headline, not your org chart. Another example I’ve encountered recently is (anonymizing) when a friend complained about a prosaic safety...
    LessWrong | 3 days ago
  • The Epoch Brief - May 8, 2026
    AI chip supply chain bottlenecks, smuggling to China, benchmark saturation, revenue efficiency at AI companies, and more
    Epoch Newsletter | 3 days ago
  • This is why AI is scary and dangerous.
    Drop 10,000 humans naked in the savannah and we'll bootstrap our way to nuclear weapons. That's the capability AI labs are racing to automate, with no idea what they're building. MIRI President Nate Soares at Harvard on why we only get one shot at this. Comment "danger" to get access to the full video.
    Machine Intelligence Research Institute | 3 days ago
  • Yoshua Bengio thinks he knows how to build safe superintelligence
By Robert Wiblin | Watch on YouTube | Listen on Spotify | Read transcript. Episode summary. I want my children to live in a world where they will have a future and there will be a democracy for them to live in. Even a 1% chance of something going really, really bad is not acceptable to me.
    Effective Altruism Forum | 4 days ago
  • Write Cause You Have Something to Say
The ones who are most successful at writeathons (Inkhaven, NaNoWriMo) are those with an overhang of things to say, usually in the form of draft posts and daydreams. When Scott Alexander said: Whenever I see a new person who blogs every day, it's very rare that that never goes anywhere or they don't get good.
    LessWrong | 4 days ago
  • AI is Breaking Two Vulnerability Cultures
    A week ago the Copy Fail vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the same day. In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open.
    LessWrong | 4 days ago
  • Coefficient Giving is hiring grantmakers and senior generalists across our Global Catastrophic Risks teams
    TL;DR: Coefficient Giving is running a major hiring round for 10+ grantmakers and senior generalists across five Global Catastrophic Risks (GCR) teams. We're allocating around $1 billion in 2026 across AI safety and catastrophic biorisk, and we’re acutely capacity-constrained. Apply here by May 17. Why we’re hiring.
    Effective Altruism Forum | 4 days ago
  • Changelog 5/8: Shop Improvements, Silicon Rewards & More
    Check out our recent site updates!
    Manifold Markets | 4 days ago
  • Suburban Apartment Bans May Be Making Poorer Neighborhoods’ Rents Increase
When suburbs block apartments, rents in nearby poor neighborhoods may rise by about $27 a month, according to a new national study. Most research on exclusionary zoning has focused on costs within the communities that adopt it; this study finds…
    California YIMBY | 4 days ago
  • How the Northwest’s Wildfire Crisis is a Sprawl Crisis
Wildfire hazard zones across the Pacific Northwest are expanding — and according to Sightline Institute, so is the public cost. Nearly 1.6 million residents lived in high-risk areas in 2023, up 8 percent since 2018, with population growing fastest in…
    California YIMBY | 4 days ago
  • Objections to effective altruism
    A discussion with Bentham's Bulldog
    Good Thoughts | 4 days ago
  • Is ProgramBench Impossible?
ProgramBench is a new coding benchmark that all frontier models spectacularly fail. We’ve been on a quest for “hard benchmarks” for a while so it’s refreshing to see a benchmark where top models do badly. Unfortunately, ProgramBench has one big problem: it’s impossible! What is ProgramBench? ProgramBench tests if a model can recreate a program from a “clean room” environment.
    LessWrong | 4 days ago
  • 80,000 Hours is hiring a lot right now — come join us!
    This forum post was first drafted using an LLM to summarise information from human-written job postings and was then edited/adjusted by hiring managers. The primary author/coordinator is Arden Koehler. Overview. 80,000 Hours has eight open positions across our advising, operations, video, and web teams, plus three expressions of interest open for video and operations roles. We're trying to...
    Effective Altruism Forum | 4 days ago
  • Richard Yetter Chappell and I Discuss Effective Altruism
    And explain why the main objections don't work
    Bentham's Newsletter | 4 days ago
  • David Reich – Why the Bronze Age was an inflection point in human evolution
    "Instead of being quiescent, natural selection is everywhere."
    The Lunar Society | 4 days ago
  • Dan Hendrycks' Moral Theory Is Very Implausible
    Does the supreme principle of morality say that you matter 360 billion times more than foreign strangers?
    Bentham's Newsletter | 4 days ago
  • The Four Curses of Nuclear Reactors (and AI)
    Rational Animations | 4 days ago
  • How Silicon Valley sold Washington an AI race
    “Who and what agendas does rivalry serve?”...
    Transformer | 4 days ago
  • Cage-Free Hotel Pledges Mean Little Without Strong Regulation
Global hotel chains are falling short on cage-free egg sourcing, suggesting that regulation, not corporate promises, may be the real driver of progress for hens.
    Faunalytics | 4 days ago
  • Bringing More Expertise to Bear on Alignment
Preamble. The preamble is less useful for the typical AlignmentForum/LessWrong reader, who may want to skip to the Adversaria vs Basinland section. On 28 October 2025, Geoffrey Irving, Chief Scientist of the UK AI Security Institute, gave a keynote talk (slides) at the Alignment Conference.
    LessWrong | 4 days ago
  • What is local government good for?
    Episode 16 is about building data centers, school districts and redistribution
    The Works in Progress Newsletter | 4 days ago
  • Enhancing Discoverability: Recent Updates to the OSF
    Lifecycle Open Science (LOS) is an approach to research that promotes transparency, openness, and accessibility across the entire research lifecycle—from planning and data collection through analysis, publication, and reuse—by making research outputs and processes more interoperable, machine-readable, and actionable across systems.
    Center for Open Science | 4 days ago
  • More articles we would like to commission
    Write for Works in Progress.
    The Works in Progress Newsletter | 4 days ago
  • AI Worker Power is Near Its Peak. They’re Finally Starting To Use It.
    Google DeepMind UK employees voted to unionize, but not for higher pay.
    Garrison's Substack | 4 days ago
  • Anders Sandberg | AI & Leviathan @ Vision Weekend USA 2025
This talk was recorded live at Vision Weekend USA, held December 5–7, 2025 in the Bay Area.
    The Foresight Institute Podcast | 4 days ago
  • Some Newsletter
    thresholds of goodness
    Atoms vs Bits | 4 days ago
  • The old tech that could help stop the next airborne pandemic
    It’s hard to imagine modern life without glycols. They are used in cosmetics, fog machines, and food. As you read this, you’re almost certainly wearing or drinking from something they were used to produce — polyester fabric or plastic bottles, for example. If you brush your teeth with toothpaste or top your salad with bottled […]...
    Future Perfect | 4 days ago
  • Elon Musk could lose his case against OpenAI — and still get what he wants
    So, what’s a guy got to do to become a billionaire around here? Greg Brockman scribbled the question in his diary, recently unsealed as trial evidence, just two years after co-founding OpenAI as a charity in 2015: “Financially, what will take me to $1B?” For Brockman, now OpenAI’s president, the answer was a yearslong restructuring […]...
    Future Perfect | 4 days ago
  • Three Model Organisms For Taste
    Astral Codex Ten | 4 days ago
  • Strengthening County Financing for Sustainable Community Health Systems in Kenya
    Living Goods | 4 days ago



NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The Digital Minds Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • AI X-risk Research Podcast
  • Alignment Newsletter Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.