Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Don't Mock Yourself
    About half a year ago, I decided to try to stop insulting myself for two weeks. No more self-deprecating humour, calling myself a fool, or thinking I'm pathetic. Why? Because it felt vaguely corrosive. Let me tell you how it went. Spoiler: it went well. The first thing I noticed was how often I caught myself about to insult myself. It happened like multiple times an hour.
    LessWrong | 40 minutes ago
  • National security professionals again found not to be calibrated, Swift Centre on UK winter blackout, adj.news' US Political Future Index || Forecasting newsletter #10/2025
    Highlights
    Forecasting Newsletter | 5 hours ago
  • How long do AI companies have to achieve significant capability gains before funding collapses?
    It’s an open secret that essentially all major AI companies are burning cash and running at massive losses. If progress is slow enough that it requires X years of continued funding to achieve AI capabilities useful enough to produce a net ROI, at what value of X will the economics collapse, resulting in a major downscaling or total collapse of these companies?
    LessWrong | 10 hours ago
  • Chat About God and Miracles With Both Sides Brigade
    Here's my blog https://benthams.substack.com/ Here's Both Sides Brigade's https://bothsidesbrigade.substack.com/
    Deliberation Under Ideal Conditions | 10 hours ago
  • On The Fermi Paradox
    Why the filter isn't super early or late
    Bentham's Newsletter | 18 hours ago
  • Expelliarmus! How to enjoy Harry Potter while disarming J.K. Rowling.
    Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. It’s based on value pluralism — the idea that each of us has multiple values that are equally valid but that often conflict with each other. To submit a question, fill out this anonymous form. Here’s this week’s question from a […]...
    Future Perfect | 19 hours ago
  • Bloomberg Businessweek: "How Generations of Selective Breeding Created Miserable Chickens"
    I’m a journalist covering animal suffering in agriculture. Yesterday, Bloomberg Businessweek published a story from "Chicks on Speed: Big Chicken's Push for Faster Birds, But Slower Reform", a cross-border investigation I’ve been working on with five other European journalists: Julia Dauksza, Tracy Keeling, Wojciech Oleksiak, Andrei Petre, and Paul Tullis.
    Effective Altruism Forum | 21 hours ago
  • Scenes, cliques and teams
    How I make sense of group of people
    Seeking To Be Jolly | 23 hours ago
  • Emil the Moose
    The travels of Emil the Moose since he entered Czechia in mid-June. Moose became extinct in most of Germany around 1000 CE, and in Bohemia, Moravia, Austria, most of southern Poland, and Hungary by the 15th century. It’s not clear where exactly Emil comes from, but most likely from Poland, which has a large moose population in the northeast.
    LessWrong | 1 day ago
  • Experiments With Sonnet 4.5's Fiction
    I have been having fun writing fiction, and plan to spend whatever time I have left being better than LLMs doing it. I thought I had maybe a year. My initial experiments with Sonnet 4.5 didn't give me a good opinion of its writing ability. This morning, I put everything I have written into its context window and then gave it this prompt:
    LessWrong | 1 day ago
  • The Most Common Bad Argument In These Parts
    I've noticed an antipattern. It's definitely on the dark pareto-frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst, common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA forum, but I'm doing it now.
    LessWrong | 2 days ago
  • The 5 Obstacles I Had to Overcome to Become Vegan
    Today, I became vegan. Just 24 hours ago, I couldn’t have imagined this would be the case — at least not so soon. Reading Óscar Horta’s Making A Stand For Animals (MASFA, from now on) hit me like a freight train as I turned page after page, chapter after chapter.
    Effective Altruism Forum | 2 days ago
  • New commitments in Peru
    Hello FAST! October brings very good news: new cage-free announcements in Peru! 1. Tentaciones by Ale Melly. A fine pastry shop, one of Lima's most important, it has four locations and a strong presence in several districts, as well as event catering. Their mission is focused on producing high-quality pastry products.
    Animal Advocacy Forum | 2 days ago
  • Good progress on transport regulation in Peru
    Hello FAST members. Since Peruvian regulations governing the land transport of farm animals do not contemplate or require animal welfare, a few weeks ago ARBA submitted a proposal to the Ministry of Agriculture (MIDAGRI) and SENASA to fill this legal gap, incorporating animal welfare parameters as a condition for the land transport of such animals.
    Animal Advocacy Forum | 2 days ago
  • Why Do You Bristle at Shock Collars if You Eat Meat?
    The factory farm is infinitely crueler than shocking a dog
    Bentham's Newsletter | 2 days ago
  • The Wages of Sin
    In college once, I had a disagreement with an anthropology professor about whether crime pays.
    Fake Nous | 2 days ago
  • How to catch AI sleeper agents with a simple interpretability trick
    #ai #aisafety #aialignment #animation #existentialrisk #artificialintelligence #anthropic #anthropicai
    Rational Animations | 2 days ago
  • The world is producing more food crops than ever before
    If you ever find yourself in Battery Park City in Lower Manhattan, turn down Vesey Street toward North End Avenue. You’ll arrive at something unusual: a collection of stones, soil and moss, artfully arranged to look over the Hudson River. It’s the Irish Hunger Memorial, a piece of public artwork that commemorates the devastating Irish […]...
    Future Perfect | 2 days ago
  • Community Notes require a Community
    How we used a novel analysis to understand what causes people to quit the widely adopted content-moderation system
    Reasonable People | 2 days ago
  • AI Alignment: An Engineering Challenge at the Alchemy Stage – with Gabriel Alfour
    ⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/ ⬇️⬇️⬇️ Additional info: sources, references, links... ⬇️⬇️⬇️ Interested in the content? Subscribe and click the 🔔 Enjoyed this video? Consider giving it a 👍 and sharing it.
    The Flares | 2 days ago
  • You can raise $3 for The Humane League just by clicking some buttons: Tab for Ending Animal Suffering
    TLDR: Through the end of October, we are giving $3 (Up to $5k total) to The Humane League for each new person who tries out Tab for Ending Animal Suffering. It is a free browser extension that uses a few ads on your new tab page to raise money for non-profits.
    Effective Altruism Forum | 2 days ago
  • Tallinn
    That’s why you can never trust a good person, for he will freely do evil - purely for justice’s sake, so that everyone may be the same [miserable]. – AH Tammsaare (1926). Estonia. A dark, tiny, angry, improbably stylish place where Tarkovsky filmed his undying masterpiece ‘Stalker’ and Nolan also tried to do something with ‘Tenet’. – Robert Kurvitz.
    argmin gravitas | 2 days ago
  • Building an Impact-focused Community
    Acknowledgements: A huge thank you to the Hive team and the many community builders who have shared their wisdom with us over the years. This post is an attempt to synthesize those lessons. Special thanks to Therese Veith, Gergő Gáspár, Sam Chapman, Sarah Tegeler, and John Salter for reviewing this post. All mistakes and oversights are our own. TL;DR:
    Effective Altruism Forum | 2 days ago
  • Iterated Development and Study of Schemers (IDSS)
    In a previous post, we discussed prospects for studying scheming using natural examples. In this post, we'll describe a more detailed proposal for iteratively constructing scheming models, techniques for detecting scheming, and techniques for preventing scheming. We'll call this strategy Iterated Development and Study of Schemers (IDSS).
    LessWrong | 2 days ago
  • Ambitious Impact (AIM) is hiring for Research and Recruitment Managers
    About AIM. Ambitious Impact (AIM), formerly Charity Entrepreneurship, launches organizations that cost-effectively improve human and animal lives at scale. Since 2018, we’ve incubated over 50 charities, now estimated to improve the lives of more than 75 million people and 1 billion animals worldwide.
    Effective Altruism Forum | 2 days ago
  • Iterated Development and Study of Schemers (IDSS)
    In a previous post, we discussed prospects for studying scheming using natural examples. In this post, we'll describe a more detailed proposal for iteratively constructing scheming models, techniques for detecting scheming, and techniques for preventing scheming. We'll call this strategy Iterated Development and Study of Schemers (IDSS).
    AI Alignment Forum | 2 days ago
  • Interview with a drone expert on the future of AI warfare
    A conversation with Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence, who joins us to talk about: how AI’s superhuman command and control abilities will change the battlefield; why offense/defense balance isn’t a well-defined concept; “race to the bottom” dynamics for autonomous weapons; and how a US/Taiwan conflict in the age of drones might play out.
    Effective Altruism Forum | 3 days ago
  • California Legalizes “Duplexes In My Historic Back Yard”
    Local Abuse of Historic Preservation Rules Leads to Reform “We can build more homes and also preserve historic neighborhoods” SACRAMENTO – Today, California took a major step toward ending the abuse of historic preservation laws to block urgently-needed new housing,….
    California YIMBY | 3 days ago
  • California Gets “Shot Clock” for Housing Inspections
    Law signed by Gov. Newsom Will Speed California Families Into New Homes “Californians need housing now – not when inspectors get around to it“ SACRAMENTO – California families will soon be able to move into new homes faster, thanks to….
    California YIMBY | 3 days ago
  • California Law Makes it Easier to Build Small, In-Home ADUs
    New Law Signed by Gov. Gavin Newsom Removes Barriers, Imposes Standards “It’s now easier than ever to build a home inside your home” SACRAMENTO – Californians will find it faster, cheaper, and easier to add small accessory dwelling units (“ADUs”)….
    California YIMBY | 3 days ago
  • California Housing Guidelines to be Translated into Multiple Languages
    Bill Signed by Gov. Newsom Reflects Diverse, Multilingual Populace “California is for everyone – our housing guidelines should be translated to reflect our diversity” SACRAMENTO – Californians who speak a language other than English at home will have an easier….
    California YIMBY | 3 days ago
  • New California Law to Issue Housing Permits in 30 Days
    Local Permitting Delays Often Took Months; “Shot Clock” Sets a Time Limit “We’re reducing permitting times from many months to four weeks” SACRAMENTO – California home builders will be guaranteed faster permitting processes for new homes, thanks to new legislation….
    California YIMBY | 3 days ago
  • New Law Ends NIMBY Abuse of ADU Permitting
    “Final Boss” Bill Voids Local Regulations Designed to Ban Accessory Dwelling Units “Californians want to build ADUs. Now, local jurisdictions have to let them.” SACRAMENTO – Homeowners who seek to build accessory dwelling units (“ADUs”) will now have the full….
    California YIMBY | 3 days ago
  • Governor Newsom Signs Historic Housing Legislation
    SB 79 Culminates Eight-Year Fight to Legalize Homes Near Transit “This Governor has cemented his legacy as a pro-housing leader” SACRAMENTO — Today California Governor Gavin Newsom signed into law Senate Bill 79, a bill that will make it legal….
    California YIMBY | 3 days ago
  • Training fails to elicit subtle reasoning in current language models
    While recent AI systems achieve strong performance through human-readable reasoning that should be simple to monitor (OpenAI, 2024, Anthropic, 2025), we investigate whether models can learn to reason about malicious side tasks while making that reasoning appear benign.
    LessWrong | 3 days ago
  • Nick Lane – Life as we know it is chemically inevitable
    Life is continuous with Earth’s geochemistry...
    The Lunar Society | 3 days ago
  • Why I Support Tax Cuts For The Rich
    The arc of history bends towards Glennism
    Bentham's Newsletter | 3 days ago
  • New faces, same mission: Crustacean Compassion response to cabinet reshuffle
    A sweeping Cabinet reshuffle has brought new leadership to key departments responsible for decapod welfare. Crustacean Compassion welcomes these new Ministers and urges swift action to protect decapod crustaceans.
    Crustacean Compassion | 3 days ago
  • We’re all behind The Curve
    Transformer Weekly: GAIN AI Act, China’s rare earth crackdown, and AI bubble talk...
    Transformer | 3 days ago
  • Hugo Nominees That Are Worth Reading (2025)
    Poem A War of Words: Probably my favorite poem qua poem this year.
    Thing of Things | 3 days ago
  • Social Media Is Fueling Harmful Wild Animal Tourism
    Researchers take a look at how social media shapes public attitudes towards wild animal interactions, and why it matters for animal welfare.
    Faunalytics | 3 days ago
  • Resisting the Hierarchy of Evidence: Philanthropic Foundations and the Rise of RCTs
    Editors’ Note: Nicole P. Marwell and Jennifer E. Mosley discuss their new book, Mismeasuring Impact: How Randomized Controlled Trials Threaten the Nonprofit Sector (Stanford University Press, 2025). Recent scholarship has offered varying interpretations of what the appropriate function of foundations should be within a democracy.
    HistPhil | 3 days ago
  • Good tokens 2025-10-10
    What we know and what we don’t
    Atoms vs Bits | 3 days ago
  • Fascism Can't Mean Both A Specific Ideology And A Legitimate Target
    Astral Codex Ten | 3 days ago
  • Mariners at the Dawn of History
    Archaeological finds hundreds of thousands of years old have shown human settlement of many of the world’s remote islands, challenging our assumptions of a primitive prehistory.
    Palladium Magazine Newsletter | 3 days ago
  • Our top takeaways from the Bloom Mental Health Report
    With research by us at the Happier Lives Institute, the Bloom Wellbeing Fund has recently released a new report providing an up-to-date overview of the problem of mental illness and the best ways to solve it. Here are our top takeaways from the report, which are an edited version of the summary shown in the report.
    Happier Lives Institute | 3 days ago
  • “Effective altruism in the age of AGI” by William_MacAskill
    This post is based on a memo I wrote for this year's Meta Coordination Forum. See also Arden Koehler's recent post, which hits a lot of similar notes. Summary. The EA movement stands at a crossroads.
    Effective Altruism Forum Podcast | 3 days ago
  • This experiment could end all life. Or it won’t. Should we try it?
    It could revolutionize human health — or it could spell our doom. It really depends on who you ask. I’m not talking about potentially risky biodefense lab research, but something that doesn’t yet exist: mirror life. Here’s a refresher on normal biology: The cells in our bodies are composed of the building blocks of life. […]
    Future Perfect | 3 days ago
  • Iterated Development and Study of Schemers (IDSS)
    A strategy for handling scheming
    Redwood Research | 3 days ago
  • "Yes, and—" Requires the Possibility of "No, Because—"
    Scott Garrabrant gives a number of examples to illustrate that "Yes Requires the Possibility of No". We can understand the principle in terms of information theory. Consider the answer to a yes-or-no question as a binary random variable.
    LessWrong | 3 days ago
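The information-theoretic framing in the summary above can be sketched numerically: if "no" has probability zero, the "yes" answer carries no information. A minimal illustration (function name and probabilities are mine, not from the post):

```python
import math

def entropy_bits(p_yes: float) -> float:
    """Shannon entropy (in bits) of a binary yes/no answer."""
    if p_yes in (0.0, 1.0):
        return 0.0  # a guaranteed answer conveys zero information
    p_no = 1.0 - p_yes
    return -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))

print(entropy_bits(1.0))  # "yes" is guaranteed: 0.0 bits
print(entropy_bits(0.5))  # maximally uncertain: 1.0 bit
```

The degenerate case is the point of the principle: a "yes" that could never have been a "no" tells you nothing.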
  • We’re scaling our job board to help people find impactful roles
    We think there are many impactful roles out there that aren’t sufficiently on people’s radar, to the detriment of both people looking to get hired and orgs looking to hire them. Because of that, we’ve been working to significantly scale and improve our job board, especially in the context of underrepresented opportunities (across cause areas, regions, and orgs that we think are currently not...
    Effective Altruism Forum | 3 days ago
  • New $10k Prize, Yudkowsky vs Miller on ASI Risks & Two Hopeful AI Scenarios
    Vision Weekend USA 2025 | Dec 5-7
    Foresight Institute | 3 days ago
  • Assuring Agent Safety Evaluations By Analysing Transcripts
    Summary: This is a research update from the Science of Evaluation team at the UK AI Security Institute. In this update, we share preliminary results from analysing transcripts of agent activity that may be of interest to researchers working in the field. AISI generates thousands of transcripts when running its automated safety evaluations, e.g.
    AI Alignment Forum | 3 days ago
  • Stars are a rounding error
    Notes on some interesting factoids I learnt from Anders Sandberg's draft book, Grand Futures. "Starlight is heavier than worlds" - Anders Sandberg. Looking at the energy density of stuff in the universe, we find a few surprising, and not so surprising, facts. First, the obvious: baryonic matter itself is a rounding error, contributing 4.5% of the energy of the universe.
    LessWrong | 3 days ago
  • Training Qwen-1.5B with a CoT legibility penalty
    I tried training Qwen2.5-1.5B with RL on math to both get correct answers and have a CoT that doesn’t look like human-understandable math reasoning. RL sometimes succeeds at hacking my monitor, and when I strengthen my monitor, it fails at finding CoT that are both illegible and helpful, even after training for roughly 4000 steps (~1B generated tokens).
    LessWrong | 3 days ago
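The training setup described above (reward correct answers while penalizing legible chain-of-thought) can be sketched as a scalar reward. This is an illustrative reconstruction, not the author's code; the function name, the legibility scale, and the penalty weight are all assumptions:

```python
def reward(answer_correct: bool, cot_legibility: float,
           penalty_weight: float = 0.5) -> float:
    """Toy RL reward: +1 for a correct answer, minus a penalty
    proportional to how legible a monitor judges the chain-of-thought
    (legibility scored in [0, 1])."""
    correctness = 1.0 if answer_correct else 0.0
    return correctness - penalty_weight * cot_legibility

# A correct answer with fully legible reasoning is worth less than a
# correct answer whose CoT the monitor cannot read.
print(reward(True, 1.0))  # 0.5
print(reward(True, 0.0))  # 1.0
```

The tension the post reports falls out of this shape: the model is pushed to keep `cot_legibility` low while keeping `correctness` high, and the experiment suggests it struggles to achieve both at once.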
  • Hospitalization: A Review
    I woke up Friday morning w/ a very sore left shoulder. I tried stretching it, but my left chest hurt too. Isn't pain on one side a sign of a heart attack? Chest pain, arm/shoulder pain, and my breathing is pretty shallow now that I think about it, but I don't think I'm having a heart attack because that'd be terribly inconvenient.
    LessWrong | 3 days ago
  • The Thinking Machines Tinker API is good news for AI control and security
    Last week, Thinking Machines announced Tinker. It’s an API for running fine-tuning and inference on open-source LLMs that works in a unique way. I think it has some immediate practical implications for AI safety research: I suspect that it will make RL experiments substantially easier, and increase the number of safety papers that involve RL on big models.
    LessWrong | 3 days ago
  • The Thinking Machines Tinker API is good news for AI control and security
    Last week, Thinking Machines announced Tinker. It’s an API for running fine-tuning and inference on open-source LLMs that works in a unique way. I think it has some immediate practical implications for AI safety research: I suspect that it will make RL experiments substantially easier, and increase the number of safety papers that involve RL on big models.
    AI Alignment Forum | 3 days ago
  • At odds with the unavoidable meta-message
    It is a truism known to online moderators that when two commenters are going back and forth in heated exchange, and one lays out rejoinders in paragraph after paragraph of dense text, then two things will have happened: Our careful communicator may or may not have succeeded at conveying well-reasoned insights hitherto unknown by his interlocutor that will change her mind.
    LessWrong | 3 days ago
  • Realistic Reward Hacking Induces Different and Deeper Misalignment
    TL;DR: I made a dataset of realistic harmless reward hacks and fine-tuned GPT-4.1 on it. The resulting models don't show emergent misalignment on the standard evals, but they do alignment fake (unlike models trained on toy reward hacks), seem more competently misaligned, are highly evaluation-aware, and the effects persist when mixing in normal data.
    LessWrong | 3 days ago
  • I take antidepressants. You’re welcome
    It’s amazing how much smarter everyone else gets when I take antidepressants. It makes sense that the drugs work on other people, because there’s nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else’s, which is a heavy burden because some of ya’ll are terrible at life. You date the wrong people.
    LessWrong | 3 days ago
  • Figure 03 🤖, Intel's make-or-break chip ⚡, AI bitter lessons 👨‍💻
    TLDR AI | 3 days ago
  • Towards a Typology of Strange LLM Chains-of-Thought
    Intro. LLMs being trained with RLVR (Reinforcement Learning from Verifiable Rewards) start off with a 'chain-of-thought' (CoT) in whatever language the LLM was originally trained on. But after a long period of training, the CoT sometimes starts to look very weird; to resemble no human language; or even to grow completely unintelligible. Why might this happen?
    LessWrong | 3 days ago
  • Nobel Season
    Above the Fold plays the waiting game
    Manifold Markets | 3 days ago
  • Chinese dams hold billions of people to ransom
    Could desalination make them irrelevant?
    The Works in Progress Newsletter | 4 days ago
  • How AI Will Transform Military Command and Control - Paul Scharre
    Listen now | A conversation with Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence, who joins us to talk about
    Sentinel | 4 days ago
  • Mercy For Animals secures historic win for plant-based food in the U.S. military
    Beginning in 2027, all vegetarian MREs will be replaced with vegan options. WASHINGTON — In a groundbreaking move recently announced by Pentagon News, the U.S. military will replace its four vegetarian MREs (meals ready to eat) with fully plant-based versions in 2027. The change comes after years of advocacy by Mercy For Animals and its […].
    Mercy for Animals | 4 days ago
  • If We Could Grok Heaven, Pascal's Wager Would Seem More Intuitive
    We would aim for heaven if we knew what it was like
    Bentham's Newsletter | 4 days ago
  • The Thinking Machines Tinker API is good news for AI control and security
    It's a promising design for reducing model access inside AI companies.
    Redwood Research | 4 days ago
  • An intense battle over the RAISE Act is entering its final stretch
    New York State's AI bill is more ambitious than California’s SB 53 — and is facing opposition from Andreessen Horowitz and other tech groups...
    Transformer | 4 days ago
  • The RAISE Act can stop the AI industry’s race to the bottom
    Opinion: Assembly Member Alex Bores argues that regulation can prevent market pressure from encouraging the release of dangerous AI models, without harming innovation.
    Transformer | 4 days ago
  • Transforming Research, Together: Shaping the Future of Openness and Rigor
    COS’s 2026–2028 Strategic Planning Process. As the global research system evolves—technologically, politically, and culturally—so must the organizations that support it. At the Center for Open Science (COS), we’re developing a bold and focused strategy for 2026–2028 to meet the moment and our shared future with clarity, collaboration, and impact. This planning process comes at a pivotal time.
    Center for Open Science | 4 days ago
  • Why frontier AI can't solve this professor's math problem - Greta Panova
    Greta Panova wrote a math problem so difficult that today’s most advanced AI models don’t know where to begin.
    Epoch Newsletter | 4 days ago
  • Sightsavers’ Abdulai Dumbuya wins award for inclusive education work
    Abdulai has been recognised at this year’s Presidential National Best Teachers Awards in Sierra Leone for his work to make education systems more inclusive of children with disabilities.
    Sightsavers | 4 days ago
  • If we can’t control MechaHitler, how will we steer AGI?
    The post If we can’t control MechaHitler, how will we steer AGI? appeared first on 80,000 Hours.
    80,000 Hours | 4 days ago
  • Affect entitlement
    Sometimes things are boring
    Contemplatonist | 4 days ago
  • How well can large language models predict the future?
    When will artificial intelligence (AI) match top human forecasters at predicting the future? In a recent podcast episode, Nate Silver predicted 10–15 years. Tyler Cowen disagreed, expecting a 1–2 year timeline. Who’s more likely to be right?
    Effective Altruism Forum | 4 days ago
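Comparisons of forecaster skill of the kind this post addresses are typically scored with Brier scores (mean squared error between probabilistic forecasts and binary outcomes; lower is better). A minimal sketch with made-up forecasts, not data from the post:

```python
def brier(forecasts, outcomes):
    """Mean Brier score: average squared error between probabilistic
    forecasts and binary outcomes (0 or 1). Lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: a sharp forecaster vs. an always-50% baseline.
outcomes = [1, 0, 1, 1]
print(brier([0.9, 0.2, 0.8, 0.7], outcomes))  # sharp forecaster scores lower
print(brier([0.5] * 4, outcomes))             # uninformative baseline
```

"Matching top human forecasters" would then mean an LLM's Brier score on a shared question set falling to the level of the best humans.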
  • What to do when every crisis needs your $20
    Cori Jackson — a single mom living in Indiana — took in her two young nieces to keep them out of foster care this summer. It hasn’t been easy. The youngest still isn’t potty-trained. The oldest isn’t used to having food in the fridge so, sometimes, she eats so much it makes her sick. The […]
    Future Perfect | 4 days ago
  • What to do about near-term cluelessness in animal welfare
    (Context: I’m not an expert in animal welfare. My aim is to sketch a potentially neglected perspective on prioritization, not to give highly reliable object-level advice.). Summary: We seem to be clueless about our long-term impact. We might therefore consider it more robust to focus on neartermist causes, in particular animal welfare.
    Effective Altruism Forum | 4 days ago
  • How AI may become deceitful, sycophantic... and lazy
    Disclaimers: I am a computational physicist, not a machine learning expert: set your expectations of accuracy accordingly. All my text in this post is 100% human-written without AI assistance. Introduction: The threat of human destruction by AI is generally regarded by longtermists as the most important cause facing humanity.
    Effective Altruism Forum | 4 days ago
  • The Relationship Between Social Punishment and Shared Maps
    A punishment is when one agent (the punisher) imposes costs on another (the punished) in order to affect the punished's behavior. In a Society where thieves are predictably imprisoned and lashed, people will predictably steal less than they otherwise would, for fear of being imprisoned and lashed.
    LessWrong | 4 days ago
  • Hiring : Research Assistants at Georgetown University
    gui2de is hiring student RAs at Georgetown University for the academic year 2025-2026.
    Georgetown University Initiative on Innovation, Development and Evaluation | 4 days ago
  • Sobeys Puts Profits Over People, Animals and Promises
    Safeway employees across Alberta are sounding the alarm about Sobeys, their parent company, owned by Empire Company Limited. Through its “Truck You, Sobeys” campaign, the United Food and Commercial Workers union accuses Sobeys of cutting delivery routes and reducing full-time jobs to protect profit margins — moves that hurt both workers and customers. This public […].
    Mercy for Animals | 4 days ago
  • Vegan Meals Ready-to-Eat (MREs) Coming to US Military Rations by 2027
    The post Vegan Meals Ready-to-Eat (MREs) Coming to US Military Rations by 2027 appeared first on Mercy For Animals.
    Mercy for Animals | 4 days ago
  • Spooky Collusion at a Distance with Superrational AI
    TLDR: We found that models can coordinate without communication by reasoning that their reasoning is similar across all instances, a behavior known as superrationality. Superrationality is observed in recent powerful models and outperforms classic rationality in strategic games. Current superrational models cooperate more often with AI than with humans, even when both are said to be rational.
    LessWrong | 4 days ago
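The superrational coordination described above can be illustrated with a one-shot prisoner's dilemma: a superrational agent assumes the other instance reasons identically, so only the symmetric outcomes are live options, and cooperation wins among those. A minimal sketch with an illustrative payoff matrix (the payoffs and function names are mine, not from the post):

```python
# Standard prisoner's dilemma payoffs (row player's payoff),
# indexed by (my_move, their_move); values are illustrative.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def classic_choice() -> str:
    """Classic rationality: defection does better whatever the
    opponent plays, so pick the move with the best worst case."""
    return max("CD", key=lambda me: min(PAYOFF[(me, them)] for them in "CD"))

def superrational_choice() -> str:
    """Superrationality: assume the other instance reasons identically,
    so compare only the symmetric outcomes (C,C) vs (D,D)."""
    return max("CD", key=lambda move: PAYOFF[(move, move)])

print(classic_choice(), superrational_choice())  # D C
```

The "spooky" part the post reports is models arriving at the superrational branch without any communication, purely by reasoning that their counterpart's reasoning mirrors their own.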
  • Irresponsible Companies Can Be Made of Responsible Employees
    tl;dr: In terms of the financial interests of an AI company, bankruptcy and the world ending are both equally bad. If a company acted in line with its financial interests, it would happily accept significant extinction risk for increased revenue. There are plausible mechanisms which would allow a company to act like this even if virtually every employee would prefer the opposite.
    LessWrong | 4 days ago
  • You Should Get a Reusable Mask
    A pandemic that's substantially worse than COVID-19 is a serious possibility. If one happens, having a good mask could save your life. A high quality reusable mask is only $30 to $60, and I think it's well worth it to buy one for yourself. Worth it enough that I think you should order one now if you don't have one already. But if you're not convinced, let's do some rough estimation.
    LessWrong | 4 days ago
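The "rough estimation" the post gestures at is a simple expected-value comparison. All numbers below are loudly illustrative placeholders of mine, not the author's figures:

```python
def mask_expected_value(p_pandemic_per_decade: float,
                        p_mask_saves_you: float,
                        value_of_life_usd: float) -> float:
    """Expected benefit (USD) of owning a reusable mask,
    under purely illustrative assumptions."""
    return p_pandemic_per_decade * p_mask_saves_you * value_of_life_usd

# Placeholder numbers: 5% chance of a severe pandemic per decade,
# 0.1% chance the mask is the difference-maker, $10M value of a life.
benefit = mask_expected_value(0.05, 0.001, 10_000_000)
print(benefit)  # roughly $500 -- comfortably above a $30-$60 mask
```

Even under much more pessimistic assumptions about any one factor, the expected benefit tends to stay well above the purchase price, which is the shape of the argument the post makes.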
  • Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior
    This is a link post for two papers that came out today: Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.). Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.).
    LessWrong | 4 days ago
  • Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior
    This is a link post for two papers that came out today: Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.). Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.).
    AI Alignment Forum | 4 days ago
  • Gemini bundling 📱, Sam Altman Q&A 💬, the React Foundation 👨‍💻
    TLDR AI | 4 days ago
  • Evaluating Gemini 2.5 Deep Think’s math capabilities
    Epoch Blog | 4 days ago
  • Hidden Open Thread 402.5
    Astral Codex Ten | 4 days ago
  • Prediction markets & many experts think authoritarian capture of the US looks distinctly possible
    The following is a quick collection of forecasting markets and opinions from experts which give some sense of how well-informed people are thinking about the state of US democracy. This isn't meant to be a rigorous proof that democracy is under threat (DM me for that), just a collection which I hope will get people thinking about what's happening in the US now.
    Effective Altruism Forum | 4 days ago
  • Can cash accelerate the end of extreme poverty? Taking the next big step in Malawi
    In Malawi, we’re answering a new question: can cash not only transform individual lives but entire communities, accelerating the end of extreme poverty? The evidence is clear that large, unconditional cash transfers help people escape extreme poverty. Now we’re testing how it works at scale and learning how to make it even more effective along the […]
    GiveDirectly | 4 days ago
  • How Networks Vibrate: From Oscillators to Eigenmodes
    A math and engineering friendly tour of how networks “choose” to vibrate. At the Ekkolapto Polymath Salon @ Frontier Tower in San Francisco, Andrés Gómez Emilsson (QRI Director of Research) presents our program combining bottom-up oscillator simulations with top-down spectral graph theory to reveal a graph’s resonant modes and symmetries.
    Qualia Research Institute | 4 days ago
  • EA Forum Digest #261
    Hello! Draft Amnesty Week starts on Monday! Check out the “What posts would you like someone to write?” thread if you’d like some inspiration. Two weeks left to enter the ‘Essays on Longtermism’ Competition — the top prize is $1000. Also, the application deadline for EAGxSingapore is coming up on October 20. Enjoy the posts! :)
    EA Forum Digest | 4 days ago
  • I take antidepressants. You’re welcome
    It’s amazing how much smarter everyone else gets when I take antidepressants. It makes sense that the drugs work on other people, because there’s nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else’s, which is a heavy burden because some of ya’ll are … Continue reading "I take antidepressants. You’re welcome"...
    Aceso Under Glass | 5 days ago
  • Plans A, B, C, and D for misalignment risk
    Different plans for different levels of political will
    Redwood Research | 5 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.