Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Adrià Garriga-Alonso
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Isaac King | Outside the Asylum
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Increasing Precision of Terms Related to Reproducibility and Replicability
    Search OpenAlex for reproducibility studies and you will find many papers. Search for replicability studies. Same. Search for robustness studies. Same. A lot of research has been done on these topics during the past decade.
    Center for Open Science | 3 hours ago
  • The Talent Map - Final Recording
    Center for Security and Emerging Technology | 4 hours ago
  • The Good News Is That One Side Has Definitively Won The Missing Heritability Debate
    Astral Codex Ten | 4 hours ago
  • Of Course You Can Write People Off Without Extensively Reading Their Work
    We do this all the time
    Bentham's Newsletter | 4 hours ago
  • When Is Good Enough Good Enough? AI, Research, And Animal Advocacy
    One of the key promises of artificial intelligence is that it can put a world of data at our fingertips in an instant — so it’s no wonder many animal advocates are eager to use it for research. In this blog, we look at using AI as a research tool and specifically explore the topic of summarization.
    Faunalytics | 5 hours ago
  • Marius Hobbhahn on the race to solve AI scheming before models go superhuman
    80,000 Hours | 5 hours ago
  • Faunalytics Index – December 2025
    This month’s Faunalytics Index provides facts and stats about global salmon farm escapes, pro-meat social media campaigns, animal welfare violations in U.S. laboratories, and more.
    Faunalytics | 5 hours ago
  • How China’s AI diffusion plan could backfire
    Opinion: Scott Singer argues that the country’s plan to embed AI across all facets of society could create huge growth — and accelerate social unrest...
    Transformer | 5 hours ago
  • Convincing People To Stop Eating Meat Isn’t Easy
    A meta-analysis of 41 studies comprising 87,000 participants led researchers to conclude that reducing consumption of meat and animal products is currently an “unsolved problem.”
    Faunalytics | 5 hours ago
  • Effective Pizzaism
    I am an effective pizzaist. Sometimes, I want the world to contain more pizza, and when that happens I want as much good pizza as I can get for as little money as I can spend. I am not going anywhere remotely subtle with that analogy, but it's the best way I can think of to express my personal stance. I. What would it mean to be an effective pizzaist?
    LessWrong | 5 hours ago
  • AI tools risk becoming AI agents
    Future of Life Institute | 5 hours ago
  • Why are companies planning to invest trillions in AI?
    Future of Life Institute | 5 hours ago
  • AI chips can be used to control AI
    Future of Life Institute | 5 hours ago
  • AI risks are everywhere
    Future of Life Institute | 5 hours ago
  • Superintelligence will take power from us
    Future of Life Institute | 5 hours ago
  • Another preemption defeat shows the AI industry is fighting a losing battle
    The second failed attempt to pass federal preemption of state AI laws could have lasting repercussions for the industry
    Transformer | 5 hours ago
  • When will AGI happen?
    Future of Life Institute | 5 hours ago
  • AI CEOs believe AI could destroy humanity
    Future of Life Institute | 5 hours ago
  • EA Forum Digest #269
    Hello! The Donation Election ends on Sunday. At time of writing, the leaderboard is as follows: If you haven’t voted yet, go ahead and do it now! Also — next week is ‘Why I donate’ week on the Forum, so consider writing a post about the reasons behind your donations. Message me on the Forum if you want any feedback/tips.
    EA Forum Digest | 6 hours ago
  • Where knowledge flows, innovation grows
    Inside GFI’s 2025 Research Grant Program and the newest projects poised to shape the future of alt proteins.
    Good Food Institute | 6 hours ago
  • Why I’m setting up a new free speech campaign (and how you can help!)
    Dear readers,
    Samstack | 6 hours ago
  • Brief Book Thoughts: Justice, Wittgenstein, Phlebas
    Where do books go? Books are where we live.
    Atoms vs Bits | 7 hours ago
  • Nature's drug database
    Millions of years of evolution have given us genomes that are like giant datasets for drug development. Finally, we are learning how to use them.
    The Works in Progress Newsletter | 9 hours ago
  • AI Safety Index: Winter 2025 Edition
    Explore the results of the Winter 2025 AI Safety Index here: futureoflife.org/index A panel of independent experts graded eight leading AI companies (OpenAI, Anthropic, Google DeepMind, xAI, Z.ai, Meta, DeepSeek, and Alibaba Cloud) on their efforts to manage both immediate harms and catastrophic risks from advanced AI systems. How did they do?
    Future of Life Institute | 10 hours ago
  • December Brief | What's really changing in consulting and AI?
    Hiring now: Operations, program management, and global campaigns roles
    EACN Newsletter | 11 hours ago
  • Stop Applying And Get To Work
    Crossposted from LessWrong. TL;DR: Figure out what needs doing and do it, don't wait on approval from fellowships or jobs. If you have short timelines, have been struggling to get into a position in AI safety, are able to self-motivate your efforts, and have a sufficient financial safety net, I would recommend changing your personal strategy entirely.
    Effective Altruism Forum | 13 hours ago
  • Soul-Whore
    Zyn met her in a dance club, as she sashayed under slowly pulsing lasers. He matched her pace, drop-swaying with each sidelong flick of her jet black hair. He closed his eyes and let her rhythm guide his steps, finding he liked the gentler flavor it gave the music.
    LessWrong | 18 hours ago
  • Shaping Food Culture Together: Lessons from Jakarta’s Walking Tour
    Jakarta moves fast. So do its appetites. Over the past five years, Indonesia’s food landscape has shifted further towards convenience and high-risk options, moving away from diets that are nourishing and environmentally grounded.
    Global Alliance for Improved Nutrition | 19 hours ago
  • OpenAI's code red 🚨, Blue Origin races SpaceX 🚀, AWS re:Invent 👨‍💻
    TLDR AI | 21 hours ago
  • Reward Mismatches in RL Cause Emergent Misalignment
    Learning to do misaligned-coded things anywhere teaches an AI (or a human) to do misaligned-coded things everywhere. So be sure you never, ever teach any mind to do what it sees, in context, as misaligned-coded things.
    LessWrong | 21 hours ago
  • Becoming a Chinese Room
    [My novel, Red Heart, is on sale for $4 this week. Daniel Kokotajlo liked it a lot, and the Senior White House Policy Advisor on AI is currently reading it.]. “Formal symbol manipulations by themselves … have only a syntax but no semantics.
    LessWrong | 23 hours ago
  • Does Life Coaching Actually Work?
    Is life coaching worth it? Learn what coaching really is, who it helps, and how to spot good coaches—and avoid scams—in an unregulated industry.
    Clearer Thinking | 23 hours ago
  • Thoughts on AI progress (Dec 2025)
    Why I'm moderately bearish in the short term, and explosively bullish in the long term
    The Lunar Society | 23 hours ago
  • Poll: Wild animal vs Farm Animal welfare
    The (new) Center for Wild Animal Welfare, the Wild Animal Initiative, the EA Animal Welfare Fund and Animal Charity Evaluators have all been in the Donation Election's top 3 candidates this week. How should we compare between interventions focused on wild animals, and those focused on farmed animals?
    Effective Altruism Forum | 1 days ago
  • People May Value Universal Happiness And Reduction Of Suffering More Than They Realize
    I have a number of intrinsic values, but two of my most important intrinsic values are happiness and the lack of suffering for conscious beings. While these are fairly common intrinsic values, I suspect many people actually value them more than they realize. In other words, upon careful reflection, many people would realize that happiness and lack of […]...
    Optimize Everything | 1 days ago
  • “Because of CLTC…” Short Video Series Showcases a Decade of Impact
    Throughout the final months of 2025, CLTC is releasing short 30-second testimonial videos on our LinkedIn page featuring voices from our community — students, alumni, and collaborators —…
    Center for Long-Term Cybersecurity | 1 days ago
  • How to quantify global human suffering? Results from OPIS’s pilot survey
    Acknowledgments: Tom Marty, Vanessa Sarre, Clare Diane Harris and Jack Koch contributed to the design of the survey, Jack Koch also carried out outreach to patient support groups, and Magdalena Kolczyńska, Jordan Clist and Michael Smith contributed to the analysis of the raw data.
    Effective Altruism Forum | 1 days ago
  • The Talent Map: How CSET’s PATHWISE Can Guide AI and Cyber Workforce Policy
    Center for Security and Emerging Technology | 1 days ago
  • Nine high-performing charities to consider this Giving Season
    Your climate donations achieve the biggest impact if you follow the recommendations by charity evaluators.
    Effective Environmentalism | 1 days ago
  • Most Arguments For Nature Clearly Prove Too Much
    There are obviously conceivable cases where it's good to destroy nature!
    Bentham's Newsletter | 1 days ago
  • Can AI embrace whistleblowing?
    As Anthropic prepares to publish its whistleblowing policy, can the industry make the most of protecting those who speak out?
    Transformer | 1 days ago
  • How Getting A Puppy Affects Families With Children
    Researchers examined how adding a canine family member shapes children’s and adults’ mental health, revealing both positive experiences and daily challenges.
    Faunalytics | 1 days ago
  • Pugwash Council statement following Hiroshima Conference
    The image accompanying this post is created by the artist Wakana Yamauchi, one of several original artworks produced for the … More...
    Pugwash Conferences on Science and World Affairs | 1 days ago
  • Want to stay subscribed?
    Hello! If you'd like to stay subscribed to the Effective Altruism Newsletter, please click this link. Wondering which of many newsletters we are? We’re the one that sends you articles, podcasts and opportunities relating to the project of effective altruism — doing the most good we...
    Effective Altruism Newsletter | 1 days ago
  • Center on Long-Term Risk: 2025 Plans
    February 2025 By Mia Taylor and Tristan Cook Overview Many promising technical interventions for s-risk reduction are routed through the AI safety community, frontier AI labs, and AI Safety Institutes. In 2025, CLR's primary focus will be on developing expertise and collaborative relationships with these groups.
    Center on Long-Term Risk | 1 days ago
  • MIRI’s 2025 Fundraiser
    MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward. MIRI is a nonprofit with a goal of helping humanity make smart and sober decisions on the topic of...
    Effective Altruism Forum | 1 days ago
  • Giving Tuesday: donations are matched up to $40,000!
    Support the science and technology of the future – donate to Foresight Institute this Giving Tuesday.
    Foresight Institute | 1 days ago
  • 'Double Up Drive 2025' - Very generous matching donations to AMF!
    For information. If you are considering donating to AMF very soon, there is the possibility your donation could be doubled if you make it via the Double Up Drive. AMF has been chosen as one of very few charities and there is a matching pot of US$500,000. Maximum of US$5,000 per donor per charity. There is no limit to how many people can donate to AMF.
    Against Malaria Foundation | 1 days ago
  • Why AI Regulation Needs Teeth and Enforcement
    "They can weigh the costs and benefits and they can say, actually, even though we promise to be transparent about the existence of this model and its capabilities, now that was just a promise." "So it makes a strong case for government regulation, but it also demonstrates the fact that government regulation needs to have like real implementation and enforcement strategies attached to it."...
    Future of Life Institute | 1 days ago
  • Why a 10% AI Catastrophe Risk Demands Transparency
    "You wouldn't get in an airplane with a 10% chance of crashing. You wouldn't cross a bridge with that chance." "And so when people building the technology say, yeah, there's a 10% chance this goes really wrong in a catastrophic way, like no other technology has gone wrong."...
    Future of Life Institute | 1 days ago
  • December 2025
    GiveDirectly, CoGi & the End of Progress
    Global Development & Economic Advancement | 1 days ago
  • MacKenzie Scott’s billion-dollar bet on vibes
    Every time a MacKenzie Scott grantee talks about receiving one of her multimillion-dollar gifts, there is always a hint of the same bashfulness, the same reverence, and the same glee. Their eyes light up. They blush a little. There’s a giggle here and there. “It’s disarming,” said Michael Lomax, head of the United Negro College […]...
    Future Perfect | 1 days ago
  • Why you should donate blood, briefly explained
    Donating money isn’t the only way you can help people. You can also give your blood. Although approximately 62 percent of Americans are eligible to donate blood, only 3 percent do so each year. But someone needs blood every few seconds in the US. While the average red blood cell transfusion is about three units, […]...
    Future Perfect | 1 days ago
  • Useful conversations & resources from our Slack community
    Hive Slack Threads: November
    Hive | 1 days ago
  • How to break free of “money dysmorphia” — and 3 other tips on generosity
    As the writer of an ethical advice column, I get a lot of questions from people who really want to do good in the world but are running into problems. They want to know how to give charity — and how to do it optimally. They want to know if they should be pressuring their parents […]...
    Future Perfect | 1 days ago
  • CLR Fundraiser 2025
    How to donate Donors from the UK can donate tax-deductibly and claim Gift Aid using the form below. Donors from the USA can donate tax-deductibly through the Giving What We Can platform. Donors in Germany, Switzerland, the Czech Republic, and Austria can donate tax-deductibly through Effektiv Spenden platform for donations of more than EUR 20.000.
    Center on Long-Term Risk | 1 days ago
  • Future Proofing Solstice
    Bay Solstice is this weekend (Dec 6th at 7pm, with a Megameetup at Lighthaven on Friday and Saturday). I wanted to give people a bit more idea of what to expect. I created Solstice in 2011. Since 2022, I've been worried that the Solstice isn't really set up to handle "actually looking at human extinction in nearmode" in a psychologically healthy way.
    LessWrong | 2 days ago
  • Help us find founders for new AI safety projects
    In the past 10 years, Coefficient Giving (formerly Open Philanthropy) has funded dozens of projects doing important work related to AI safety / navigating transformative AI. And yet, perhaps most activities that would improve expected outcomes from transformative AI have no significant project pushing them forward, let alone multiple.
    Effective Altruism Forum | 2 days ago
  • AI Safety Newsletter #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    AI Safety Newsletter | 2 days ago
  • A Rosetta Stone for AI benchmarks
    We rely on benchmarks to measure AI capabilities, but even the best benchmarks are just narrow glimpses into what AI can do. Consider a benchmark. If a model is really bad, it will score 0% on the benchmark. But the same is true for a model that’s extremely bad — the benchmark offers no signal to distinguish these two models, even though one is much better than the other.
    Epoch Blog | 2 days ago
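    The floor effect described in the item above can be sketched with a toy threshold model (the skill numbers and the scoring rule here are illustrative assumptions, not taken from the Epoch post):

    ```python
    def benchmark_score(model_skill, item_difficulties):
        """Fraction of items solved, under a toy rule: a model solves an
        item only if its skill exceeds the item's difficulty."""
        solved = sum(1 for d in item_difficulties if model_skill > d)
        return solved / len(item_difficulties)

    # A benchmark whose items are all far too hard for either model.
    hard_items = [0.9] * 100

    weak, very_weak = 0.30, 0.05  # illustrative skill levels
    print(benchmark_score(weak, hard_items))       # 0.0
    print(benchmark_score(very_weak, hard_items))  # 0.0
    # Both models floor at 0%, so the benchmark gives no signal
    # about which one is more capable.
    ```

    The same saturation problem appears at the ceiling: once every item is easier than both models' skill, both score 100% and the benchmark again distinguishes nothing.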
  • DeepSeek returns 🐋, Apple AI chief quits 📱, Neuralink upgrades 🧠
    TLDR AI | 2 days ago
  • A Pragmatic Vision for Interpretability
    Executive Summary. The Google DeepMind mechanistic interpretability team has made a strategic pivot over the past year, from ambitious reverse-engineering to a focus on pragmatic interpretability: Trying to directly solve problems on the critical path to AGI going well. Carefully choosing problems according to our comparative advantage.
    AI Alignment Forum | 2 days ago
  • A Pragmatic Vision for Interpretability
    Executive Summary. The Google DeepMind mechanistic interpretability team has made a strategic pivot over the past year, from ambitious reverse-engineering to a focus on pragmatic interpretability: Trying to directly solve problems on the critical path to AGI going well. Carefully choosing problems according to our comparative advantage.
    LessWrong | 2 days ago
  • The 2024 LessWrong Review
    We have a ritual around these parts. Every year, we have ourselves a little argument about the annual LessWrong Review, and whether it's a good use of our time or not. Every year, we decide it passes the cost-benefit analysis. Oh, also, every year, you do the following: Spend 2 weeks nominating the best posts that are at least one year old.
    LessWrong | 2 days ago
  • Head of Community | GovAI Blog
    Centre for the Governance of AI | 2 days ago
  • Epistemic Humility vs. Tobacco Abolition
    The School for Moral Ambition (SMA) describes itself as a movement of idealists taking on the world’s most pressing problems. Their proposed campaign to 'abolish the tobacco industry altogether' is one of their more ambitious proposals.
    Effective Altruism Forum | 2 days ago
  • Interview: What it's like to be a bat
    For the purposes of this transcript, some high-pitched clicking sounds have been removed. The below is an otherwise unedited transcript of an interview between Dwarkesh Patel and a bat. DWARKESH: Thanks for coming onto the podcast. It’s great to have you—. BAT: Thanks for having me. Yeah. DWARKESH: You can hear me okay? I mean, uh, all the equip—.
    LessWrong | 2 days ago
  • “Announcing the new AIM CEO!” by Ambitious Impact
    We, the AIM Board and outgoing CEO Joey Savoie, are delighted to announce that Samantha Kagel has been selected as AIM's new CEO, effective December 1, 2025. Over the last few months, we have been engaged in a highly important activity: finding AIM's next CEO.
    Effective Altruism Forum Podcast | 2 days ago
  • How Can Interpretability Researchers Help AGI Go Well?
    Executive Summary. Over the past year, the Google DeepMind mechanistic interpretability team has pivoted to a pragmatic approach to interpretability, as detailed in our accompanying post, and is excited for more in the field to embrace pragmatism!
    LessWrong | 2 days ago
  • Announcing the new AIM CEO!
    We, the AIM Board and outgoing CEO Joey Savoie, are delighted to announce that Samantha Kagel has been selected as AIM's new CEO, effective December 1, 2025. Over the last few months, we have been engaged in a highly important activity: finding AIM’s next CEO.
    Charity Entrepreneurship | 2 days ago
  • Announcing the new AIM CEO!
    We, the AIM Board and outgoing CEO Joey Savoie, are delighted to announce that Samantha Kagel has been selected as AIM's new CEO, effective December 1, 2025. Over the last few months, we have been engaged in a highly important activity: finding AIM’s next CEO.
    Ambitious Impact | 2 days ago
  • How Can Interpretability Researchers Help AGI Go Well?
    Executive Summary. Over the past year, the Google DeepMind mechanistic interpretability team has pivoted to a pragmatic approach to interpretability, as detailed in our accompanying post, and is excited for more in the field to embrace pragmatism!
    AI Alignment Forum | 2 days ago
  • How middle powers may prevent the development of artificial superintelligence
    In this paper, we make recommendations for how middle powers may band together through a binding international agreement and achieve the goal of preventing the development of ASI, without assuming initial cooperation by superpowers. You can read the paper here: asi-prevention.com.
    LessWrong | 2 days ago
  • Contaminated Milk Recall Highlights Need for a Safer, Plant-Based Food System
    The following statement regarding the Prairie Farms milk recall due to potential cleaning agent contamination may be attributed to Jennifer Behr, Director of Plant-Based Initiatives at Mercy For Animals: “The recall of Prairie Farms Gallon Fat Free Milk, with approximately 320 gallons sold potentially contaminated with food-grade cleaning agents, is a serious public health concern. […].
    Mercy for Animals | 2 days ago
  • You probably already like imprecise probabilities
    "The chance of rain is 0.50496847", said no one
    Jesse’s Substack | 2 days ago
  • Talking With Henry Shevlin About Consciousness
    This was a super fun chat!
    Bentham's Newsletter | 2 days ago
  • Podcast Strategy Doc (December 2025)
    Back to The Lunar Society mission
    The Lunar Society | 2 days ago
  • Open Thread 410
    Astral Codex Ten | 2 days ago
  • 🟩 Airspace closed over Venezuela, Russia-Ukraine talks continue, DeepSeek reaches gold IMO level || Global Risks Weekly Roundup #48/2025
    Executive summary
    Sentinel | 2 days ago
  • My Shrimp Debate With Jeff Maurer
    One third fundraising, one third debate, one third comedy
    Bentham's Newsletter | 2 days ago
  • Helping our non-EA friends give effectively: Update on Karoteno
    TL;DR: We are live: The Karoteno effective giving platform is now operational at karoteno.org. The Goal: A "normie-friendly" donation experience for Spanish speakers who find traditional EA analysis alienating. Action: Share this with your non-EA friends! We are also running a donation-matching campaign this December—please reach out if you’d like to help fund the matching pool. Introduction.
    Effective Altruism Forum | 2 days ago
  • Carnivorous Aquaculture Threatens Wild Fishes And Ecosystems
    The demand for wild fishes as feed for European aquaculture operations is projected to increase by 70% by 2040, highlighting immense risks to marine ecosystems.
    Faunalytics | 2 days ago
  • Subagents for Shrimp
    and other good causes
    Good Thoughts | 2 days ago
  • No, AI hasn’t run out of data
    AI models’ relationship with our data is getting more dynamic, contextual and private—and the stakes are high The Claim Earlier this year, Elon Musk claimed that ‘all human data for AI training has been exhausted’. Ilya Sutskever, a co-founder of OpenAI, has said the world has reached ‘peak data’. A recent episode of the BBC’s […].
    Open Mined | 2 days ago
  • GWWC's 2025 evaluations of evaluators
    The Giving What We Can research team is excited to share the results of our 2025 round of evaluations of charity evaluators and grantmakers! In this round, we completed two evaluations that will inform our donation recommendations for the 2025 giving season.
    Effective Altruism Forum | 2 days ago
  • Is This Anything? 23
    Attempting attempts
    Atoms vs Bits | 2 days ago
  • How AI Companies React to Whistleblower Protections
    "That's an adversarial process to an extent, if that's the mindset of the company." "We know there was pushback on SB53, the whistleblower protection side." "Companies were not happy to have that as broad as possible."
    Future of Life Institute | 2 days ago
  • Whistleblower Trust in Government Is Extremely Low
    "if you violate the California state whistleblower protection provisions, you have to pay a fine of $10,000 as a company, which is probably like a five-second burn for most companies these days. Not a terrible deterrent." "how strong is your trust in government to understand and act well on concerns that are brought to you, and that was extremely low."...
    Future of Life Institute | 2 days ago
  • Portfolio Lead
    Portfolio Lead (vacancy SYS-1301), Kigali, Rwanda. Fixed-term contract, 24 months. Department: Programmes Partnerships. Closing date: Mon, 12/08/2025, 12:00. Apply: https://jobs.gainhealth.org/vacancies/1301/apply/. The Global Alliance for Improved Nutrition (GAIN) is seeking a Portfolio Lead...
    Global Alliance for Improved Nutrition | 2 days ago
  • The EU AI Act Newsletter #91: Whistleblower Tool Launch
    The European Commission has launched a whistleblower tool for reporting suspected breaches of the AI Act directly to the EU AI Office.
    The EU AI Act Newsletter | 2 days ago
  • December 2025 newsletter
    🚀 The latest news from the EA community...
    Altruismo eficaz | 2 days ago
  • The end of malaria
    I wasn’t always a boring newsroom-bound editor. Back in my days as a Time magazine foreign correspondent, I used to fly to far-flung places, recorder and notebook in hand. That’s how, in the summer of 2005, I found myself in Mae Sot, a small city in Thailand near the border with Myanmar, tasked with contributing […]...
    Future Perfect | 2 days ago
  • Your blood could save up to three lives this Giving Tuesday
    Donating money isn’t the only way you can help people. You can also give your blood. Of the approximately 62 percent of Americans eligible to donate blood, only 3 percent do so each year. But someone needs blood every few seconds in the US. While the average red blood cell transfusion is about three units, […]...
    Future Perfect | 2 days ago
  • Why Our Institutions Aren’t Ready for AI
    "You have to know it's a big deal." "You have to know that we're not prepared." "Some of them will not admit that to you, but look at their track record and you'll be able to see that they're not prepared for this."
    Future of Life Institute | 2 days ago
  • ‘Protestant Magic’ Today
    Nationalism and occultism in Ireland
    The Fitzwilliam | 2 days ago
  • Insulin Resistance and Glycemic Index
    In my previous post Traditional Food*, I explained how what we think of as a "traditional" diet is a nationalist propaganda campaign that's making us sick. In this post I'll go into the biological mechanisms. There are four substances that the body can metabolize: carbohydrates, fats, protein and alcohol.
    LessWrong | 3 days ago
  • In December, every donation counts double
    Sentience Politics | 3 days ago
  • Writing in public is still underrated
    If you have ideas but never write them down, this post is for you.
    AI Safety Takes | 3 days ago
  • Three axes of consciousness
    or, why dualists can believe in conscious robots
    Experience Machines | 3 days ago
  • Things I read and liked in November
    Child-rearing, corrections, co-location, Claude's cultural conformity, consciousness conference
    Experience Machines | 3 days ago

Effective Altruism News is a side project of Tlön.