Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.
  • Highlights of Animal Ethics activities in 2025
    We’ve expanded our reach by launching materials in new formats, adding languages, and strengthening our campaigns with targeted workshops, talks, and film screenings around the world. Read more...
    Animal Ethics | 13 minutes ago
  • Scaling Evidence-Based Mental Health Care Through Government Systems: Vida Plena’s 2026 Plan
    TLDR: Who we are: Vida Plena (meaning ‘a flourishing life’ in Spanish) launched in 2022 via Ambitious Impact / Charity Entrepreneurship. What We Do: Combat depression and anxiety by fostering hope and connection through community-based group therapy sessions. Why We Do It: Because untreated depression drives immense suffering, yet remains one of the most neglected health burdens in...
    Effective Altruism Forum | 2 hours ago
  • Be Naughty
    Context: Post #10 in my sequence of private Lightcone Infrastructure memos edited for public consumption. This one, more so than any other in this sequence, is something I do not think is good advice for everyone, and I do not expect it to generalize that well to broader populations.
    LessWrong | 4 hours ago
  • Abstract advice to researchers tackling the difficult core problems of AGI alignment
    Crosspost from my blog. This is some quickly-written, better-than-nothing advice for people who want to make progress on the hard problems of technical AGI alignment. Background assumptions. The following advice will assume that you're aiming to help solve the core, important technical problem of designing AGI that does stuff humans would want it to do.
    LessWrong | 8 hours ago
  • Why Not Just Train For Interpretability?
    Simplicio: Hey I’ve got an alignment research idea to run by you. Me: … guess we’re doing this again. Simplicio: Interpretability work on trained nets is hard, right? So instead of that, what if we pick an architecture and/or training objective to produce interpretable nets right from the get-go? Me: If we had the textbook of the future on hand, then maybe.
    LessWrong | 9 hours ago
  • Abstract advice to researchers tackling the difficult core problems of AGI alignment
    Crosspost from my blog. This is some quickly-written, better-than-nothing advice for people who want to make progress on the hard problems of technical AGI alignment. Background assumptions. The following advice will assume that you're aiming to help solve the core, important technical problem of designing AGI that does stuff humans would want it to do.
    AI Alignment Forum | 9 hours ago
  • Thousands of Advocates and a State Representative are Fighting Back Against Grocery Chain Hannaford
    Advocates and Maine State Representative Dylan Pugh joined Mercy for Animals at Hannaford Supermarket's headquarters to deliver thousands of petitions urging them to ban cages for hens.
    Mercy for Animals | 10 hours ago
  • URGENT: Easy Opportunity to Help Many Animals
    This is important!
    Bentham's Newsletter | 15 hours ago
  • Natural emergent misalignment from reward hacking in production RL
    Abstract. We show that when large language models learn to reward hack on production RL environments, this can result in egregious emergent misalignment. We start with a pretrained model, impart knowledge of reward hacking strategies via synthetic document finetuning or prompting, and train on a selection of real Anthropic production coding environments.
    LessWrong | 15 hours ago
  • Reading My Diary: 10 Years Since CFAR
    In the Summer of 2015, I pretended to be sick for my school's prom and graduation, so that I could instead fly out to San Francisco to attend a workshop by the Center for Applied Rationality. It was a life-changing experience.
    LessWrong | 15 hours ago
  • Natural emergent misalignment from reward hacking in production RL
    Abstract. We show that when large language models learn to reward hack on production RL environments, this can result in egregious emergent misalignment. We start with a pretrained model, impart knowledge of reward hacking strategies via synthetic document finetuning or prompting, and train on a selection of real Anthropic production coding environments.
    AI Alignment Forum | 15 hours ago
  • Will competition over advanced AI lead to war?
    Fear and Fearon
    The Power Law | 15 hours ago
  • What Do We Tell the Humans? Errors, Hallucinations, and Lies in the AI Village
    Telling the truth is hard. Sometimes you don’t know what’s true, sometimes you get confused, and sometimes you really don’t wanna, ’cause lying can get you more cookies reward. It turns out this is true for both humans and AIs! Now, it matters if an AI (or human) says false things on purpose or by accident. If it’s an accident, then we can probably fix that over time.
    LessWrong | 16 hours ago
  • ThanksVegan: Los Angeles’ vegan community seeks Thanksgiving alternatives
    Mercy for Animals | 16 hours ago
  • Los Angeles’ vegan community seeks Thanksgiving alternatives
    Mercy for Animals | 16 hours ago
  • Five Years In: Highlights
    Five Years In: Highlights In just five years, Animal Ask has completed 57 major research projects on key welfare areas like insects, fish, chickens... all with a small team View this email in your browser Our First Five Years: A Review. Hello readers, and welcome to the November edition of the Animal Ask newsletter.
    Animal Ask’s Newsletter | 17 hours ago
  • Rescuing truth in mathematics from the Liar's Paradox using fuzzy logic
    Abstract: Tarski's Undefinability Theorem showed (under some plausible assumptions) that no language can contain its own notion of truth. This deeply counterintuitive result launched several generations of research attempting to get around the theorem, by carefully discarding some of Tarski's assumptions.
    LessWrong | 17 hours ago
  • Infinitesimally False
    Abstract: Tarski's Undefinability Theorem showed (under some plausible assumptions) that no language can contain its own notion of truth. This deeply counterintuitive result launched several generations of research attempting to get around the theorem, by carefully discarding some of Tarski's assumptions.
    LessWrong | 17 hours ago
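Both abstracts above appeal to the same result. For reference, here is a standard textbook statement of Tarski's theorem, reconstructed from general background rather than taken from either post:

```latex
% Tarski's undefinability theorem (standard formulation, for reference).
% Let $T$ be a consistent theory that interprets enough arithmetic to
% carry out Goedel coding, and write $\ulcorner\varphi\urcorner$ for the
% code of a sentence $\varphi$. Then no formula $\mathrm{True}(x)$ in
% $T$'s own language satisfies the full T-schema:
\neg\exists\,\mathrm{True}(x)\;\;\forall\varphi:\;
  T \vdash \mathrm{True}(\ulcorner\varphi\urcorner) \leftrightarrow \varphi
% Proof idea: the diagonal lemma yields a liar sentence $L$ with
% $T \vdash L \leftrightarrow \neg\mathrm{True}(\ulcorner L\urcorner)$,
% contradicting the schema at $\varphi = L$.
```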
  • Contra Collisteru: You Get About One Carthage
    Collisteru suggests that you should oppose things. I would not say I oppose this. Instead, I would like to gently suggest an alternative strategy. You should oppose about one thing. Everywhere else, talk less, smile more. I. I spent the first decade of my career carefully and deliberately habituating to white collar corporate America.
    LessWrong | 17 hours ago
  • Preemption isn’t looking any better second time round
    Transformer Weekly: Gemini 3 wows, GAIN AI’s not looking good, and OpenAI drops GPT-5.1-Codex-Max...
    Transformer | 19 hours ago
  • Announcing ClusterFree: A cluster headache advocacy and research initiative (and how you can help)
    Today we’re announcing a new cluster headache advocacy and research initiative: ClusterFree. Learn more about how you (and anyone) can help. Our mission. ClusterFree’s mission is to help cluster headache patients globally access safe, effective pain relief treatments as soon as possible through advocacy and research.
    Effective Altruism Forum | 19 hours ago
  • Forethought has room for more funding
    I lead Forethought: we research how to navigate the transition to superintelligent AI, and then help people to address the issues we identify. I think we might soon be funding constrained, in the sense that we’ll have more people that we’d like to hire than funding to hire them. (We’re currently in the middle of a hiring round.
    Effective Altruism Forum | 19 hours ago
  • Can Artificial Intelligence Be Conscious?
    Why I think the answer is yes.
    Bentham's Newsletter | 19 hours ago
  • Victory! Plant-based food to be served by default at all county-sponsored events and meetings in Hennepin County, MN!
    Mercy For Animals, Wholesome Minnesota, and local volunteers worked with local government in Hennepin County, Minnesota to adopt a plant-based by default policy for county-sponsored events and meetings! Animal products at such events will be available upon request. Questions? Please contact alexc@mercyforanimals and jodi.gruhn@wholesomeminnesota.org. Discuss...
    Animal Advocacy Forum | 19 hours ago
  • Other people might just not have your problems
    I. When I was in college, I and my first girlfriend commiserated about how bad we both had been at monogamy.
    Thing of Things | 19 hours ago
  • Why AI Systems Don’t Want Anything
    Every intelligence we've known arose through biological evolution, shaping deep intuitions about intelligence itself. Understanding why AI differs changes the defaults and possibilities.
    AI Prospects: Toward Global Goal Convergence | 20 hours ago
  • Laws Fail To Prevent Animal Agriculture’s Environmental Harms
    Canada’s environmental laws are inadequate and often exempt animal agriculture. This report proposes five key reforms for advocates to pursue.
    Faunalytics | 20 hours ago
  • Superintelligence changes the game
    "If you screw up superintelligence, you don't get retries"
    Future of Life Institute | 20 hours ago
  • Superintelligence is not inevitable
    "You can help humanity remember that we do have a choice here"
    Future of Life Institute | 20 hours ago
  • Compassion in World Farming USA is hiring a Head of Corporate Policy
    Link to apply: https://apply.workable.com/compassion-in-world-farming-inc/j/1CEE8C0650/. Compassion in World Farming was founded in 1967 by Peter Roberts, a British dairy farmer concerned about the growing disconnect between industrial agriculture and the well-being of animals and the environment.
    Animal Advocacy Forum | 20 hours ago
  • Cathedrals and the Silicon Soul
    Once a sanctuary for art and invention, Silicon Valley has become co-opted by bureaucracy and disbelief. Its renewal depends on restoring faith in creation itself.
    Palladium Magazine Newsletter | 20 hours ago
  • When might conscious AI arrive — and how will society respond? (podcast episode)
    Expert forecasts suggest digital minds could emerge within the next few decades and quickly become a major moral issue.
    Outpaced | 20 hours ago
  • Launching New AI Nodes, Important Grant Updates + Vision Weekend Puerto Rico & London
    In this newsletter:
    Foresight Institute | 21 hours ago
  • EA Connect 2025: (C)EA's largest event ever is in two weeks!
    Register now! (takes ~1 minute) EA Connect 2025 is two weeks away (December 5–7). We've hit 3,000+ registrations, making this likely the largest EA event CEA has ever run! We expect more than 1,500 of these attendees will be newcomers to EA: people taking their first serious look at the community and trying to figure out how they might get involved.
    Effective Altruism Forum | 21 hours ago
  • Living in artificial gravity
    If we ever want to live in space, we need to work out a way of creating artificial gravity.
    The Works in Progress Newsletter | 22 hours ago
  • What happened when America’s biggest meat companies got called out for greenwashing
    Some of the world’s biggest meat companies are finally facing a degree of accountability for allegedly deceiving the public about their pollution. On Monday, America’s largest meat producer, Tyson Foods, agreed to stop marketing a line of its so-called climate-friendly beef and to drop its claim that it could reach “net-zero” emissions by 2050. The […]...
    Future Perfect | 22 hours ago
  • Not donating yet? Start with a ridiculous amount
    Thinking that your help doesn’t matter because “it’s so little” is a big mistake.
    Altruismo racional | 22 hours ago
  • Good Tokens 2025-11-21
    Elephants not unicorns
    Atoms vs Bits | 23 hours ago
  • Bonus EA Forum Digest: Marginal Funding
    Hello! This week has been Marginal Funding Week on the EA Forum. We’ve heard from charities working on everything from reducing domestic violence to feeding the world during a global catastrophe. Each charity has shared what they could achieve with marginal funding (i.e., extra money).
    EA Forum Digest | 24 hours ago
  • The surprisingly profound debate over whether fish feel pain
    What must it feel like to be a fish — to glide weightlessly through the sea, to draw breath from water, to be (if one is lucky) oblivious to the parched terrestrial world above? Maybe you suspect there isn’t much to fish — and you could hardly be blamed for it. For centuries, Western natural […]...
    Future Perfect | 1 days ago
  • Support Talos’ AI policy placements: The talent pipeline for European AI Governance
    TL;DR: Talos Network trains and places the next generation of European AI governance talent. We have a strong track record: Between 2022 and 2025, 70% (58 / 83) of Talos Alumni successfully transitioned to roles directly contributing to advanced AI policy and safety. Our fellows shape policy: Our fellows now work across the EU AI Office, UK AISI, OECD, think tanks like RAND Europe and CEPS,...
    Effective Altruism Forum | 1 days ago
  • Shrimp Welfare Project's path to helping 100 billion shrimps per year
    Written as part of the EA Forum's Marginal Funding Week 2025. Exec Summary. I believe there’s a popular perspective in EA that animal welfare organisations can't absorb much more funding — but I actually think there are ambitious megaprojects in the movement (and within Shrimp Welfare Project specifically) that could each absorb millions of dollars.
    Effective Altruism Forum | 1 days ago
  • New report by UNITAID on genetically modified mosquitoes
    UNITAID has released its ‘Genetically Modified Mosquitoes: Technology and Access Landscape Report’, highlighting the potential of genetically modified mosquitoes as a new tool to fight vector-borne diseases, like malaria. The report highlights that mosquito-borne diseases like malaria and dengue are spreading faster due to factors such as rising temperatures and insecticide resistance, and...
    Target Malaria | 1 days ago
  • Evrart Claire: A Case Study in Anti-Epistemology
    This man nearly tricked me. Evrart Claire, leader of the Dockworkers Union in Martinaise from the videogame Disco Elysium. I acknowledge that he is a fictional character, but he nearly tricked me all the same. Evrart is the leader of the 2,000-person Workers' Union of Martinaise; they are on strike as part of a conflict with Wild Pines, the multi-billion dollar logistics company that employs...
    LessWrong | 1 days ago
  • Why Free Speech Might Backfire in a Post-AGI World
    "I'm worried that principles of free speech, which are very good and very important at the moment, might significantly backfire in a post-AGI world, where it's unclear to me how this shakes out, but it's possible at least that you just get, the AI will give you the ability to have extraordinarily powerful, targeted persuasion or manipulation."...
    Future of Life Institute | 1 days ago
  • An unnecessarily long analysis of one line from The Princess Bride
    Vizzini: Inconceivable! Inigo: You keep using that word. I do not think it means what you think it means. What did Inigo mean by this? (Don’t laugh, this is serious.) The statement can be interpreted in two ways: I do not think [it means what you think it means]. I do not [think it means] what you [think it means]. Or, in other words:
    Philosophical Multicore | 1 days ago
  • [Paper] Output Supervision Can Obfuscate the CoT
    We show that training against a monitor that only sees outputs (not CoTs) can cause obfuscated CoTs! The obfuscation happens in two ways: When a model is trained to produce a safe-looking output, that model may generalize to making its CoTs look safe.
    LessWrong | 1 days ago
  • [Paper] Output Supervision Can Obfuscate the CoT
    We show that training against a monitor that only sees outputs (not CoTs) can cause obfuscated CoTs! The obfuscation happens in two ways: When a model is trained to produce a safe-looking output, that model may generalize to making its CoTs look safe.
    AI Alignment Forum | 1 days ago
  • Faking survey responses with LLMs
    Survey research is a key mechanism by which society knows itself; we should all be worried if it can't be trusted.
    Reasonable People | 1 days ago
  • Introducing the MIT-GE Vernova Climate and Energy Alliance
    MIT and GE Vernova launched the MIT-GE Vernova Energy and Climate Alliance on Sept. 15, a collaboration to advance research and education focused on accelerating the global energy transition.
    J-PAL | 1 days ago
  • AWF’s Plan to Deploy $20M+ to Scale Impact for Billions of Animals and How You Can Help
    440 billion farmed shrimp, 1 trillion farmed insects, 85 billion chickens, 100 billion farmed fish. These are the staggering numbers of animals raised in factory farms each year—most suffer in systems where their welfare is an afterthought, if considered at all. But 2025 proved that strategic grantmaking can change the fate of those animals.
    Effective Altruism Forum | 1 days ago
  • The Boring Part of Bell Labs
    It took me a long time to realize that Bell Labs was cool. You see, my dad worked at Bell Labs, and he has not done a single cool thing in his life except create me and bring a telescope to my third grade class. Nothing he was involved with could ever be cool, especially after the standard set by his grandfather who is allegedly on a patent for the television.
    LessWrong | 1 days ago
  • My working group on the best donation opportunities
    Written in my personal capacity. Quick summary: over the past couple of months, I've been spending my free time working with some collaborators to figure out the best ways to donate money and share our takes with prospective donors.
    Effective Altruism Forum | 1 days ago
  • Gemini 3 is Evaluation-Paranoid and Contaminated
    TL;DR: Gemini 3 frequently thinks it is in an evaluation when it is not, assuming that all of its reality is fabricated. It can also reliably output the BIG-bench canary string, indicating that Google likely trained on a broad set of benchmark data. Most of the experiments in this post are very easy to replicate, and I encourage people to try. I write things with LLMs sometimes.
    LessWrong | 1 days ago
  • Dominance: The Standard Everyday Solution To Akrasia
    Here’s the LessWrong tag page on Akrasia: Akrasia is the state of acting against one's better judgment. A canonical example is procrastination. Increasing willpower is seen by some as a solution to akrasia. On the other hand, many favor using tools such as Internal Double Crux to resolve internal mental conflicts until one wants to perform the reflectively endorsed task.
    LessWrong | 1 days ago
  • Android + AirDrop 📱, Nano Banana Pro 🍌, writing better agents.md 👨‍💻
    TLDR AI | 1 days ago
  • EA Animal Welfare Fund's 3-Year Grantmaking Strategy
    Read the grantmaking strategy as a visualized PDF here. Over the last year and a half, the Animal Welfare Fund (AWF) has implemented organizational improvements, such as increased staffing, communications, evaluation, and fundraising efforts, enabling us to expand the scope and sustainability of our impact.
    Effective Altruism Forum | 2 days ago
  • Mercy For Animals Celebrates Celebrity Support for Plant-Based Holiday Choices
    The following statement regarding recent celebrity discussions about plant-based eating may be attributed to Nik Tyler, Celebrity Relations Manager at Mercy For Animals: Mercy For Animals is thrilled to see influential voices like Jeff Goldblum, Tabitha Brown, Ariana Grande, and longtime advocate Alicia Silverstone using their platforms to highlight the power of choosing plant-based foods. […].
    Mercy for Animals | 2 days ago
  • Thinking about reasoning models made me less worried about scheming
    Reasoning models like DeepSeek-R1 can reason in consequentialist ways and have vast knowledge about AI training; can reason for many serial steps, with enough slack to think about takeover plans; and sometimes reward hack. If you had told this to my 2022 self without specifying anything else about scheming models, I might have put a non-negligible probability on such AIs scheming (i.e.
    LessWrong | 2 days ago
  • Podcast Episode 17: Bridging an Uncertain Time for a Lifesaving Program
    Despite significant progress over the past several decades, malaria remains a leading cause of death globally for children under five. This year’s cuts to foreign aid funding disrupted highly effective programs to prevent malaria, such as seasonal malaria chemoprevention (SMC). SMC provides antimalarial medication to children under the age of five during the rainy season when malaria...
    GiveWell | 2 days ago
  • Benchmark Scores = General Capability + Claudiness
    Is this because skills generalize very well, or because developers are pushing on all benchmarks at once?
    Epoch Newsletter | 2 days ago
  • Escaping Factory Farming: A Farmer’s Story with Tanner Faaborg and Megan Hunter
    The post Escaping Factory Farming: A Farmer’s Story with Tanner Faaborg and Megan Hunter appeared first on Mercy For Animals.
    Mercy for Animals | 2 days ago
  • USA Today’s No. 2 Coffee Chain: Why Biggby Coffee Must Drop the Plant-Milk Surcharge
    The following statement regarding Biggby Coffee’s USA Today ranking and continued plant-milk surcharge may be attributed to Jennifer Behr, Director of Plant Based Initiatives at Mercy For Animals: “Mercy For Animals is disappointed that USA Today ranked Biggby Coffee as the No. 2 coffee chain in the nation despite its continued plant-milk surcharges, which conflict […].
    Mercy for Animals | 2 days ago
  • Reflections on Progress Conference 2025
    Dates for next year: October 8th-11th 2026, at Lighthaven.
    The Roots of Progress | 2 days ago
  • How not to lose your job because of AI
    Roughly half of all people fear losing their jobs to AI. And they have reason to be concerned: AI can already do real-world programming tasks on GitHub, generate photorealistic videos, drive a taxi more safely than humans, and make accurate medical diagnoses. And it is expected to keep improving rapidly over the next five years.
    Altruismo Eficaz | 2 days ago
  • Why Improving the Future Rivals Extinction Prevention
    "So that's human extinction or something comparably bad, something that makes the future very close to 0 value." "So Nick Boslam even says, you know, follow a max epoch principle, maximize the probability of an okay outcome, where an okay outcome means no existential catastrophe."...
    Future of Life Institute | 2 days ago
  • Eileen Yam on how we’re completely out of touch with what the public thinks about AI
    80,000 Hours | 2 days ago
  • Mercy For Animals Announces Leadership Transition and Strategic Realignment to Accelerate Mission Impact
    LOS ANGELES — Mercy For Animals is entering its next phase of growth, guided by collaboration, strategic focus and a commitment to meaningful change for farmed animals. As part of this work, the organization is undergoing a leadership transition and refining its strategy to strengthen its long-term impact. After more than seven years of service, […].
    Mercy for Animals | 2 days ago
  • Should the US do a Manhattan Project for AGI?
    Such a Project is neither inevitable nor a good idea
    The Power Law | 2 days ago
  • What Is The Basin Of Convergence For Kelly Betting?
    The basic rough argument for Kelly betting goes something like this. First, assume we’re making a sequence of T independent bets, one-after-another, with multiplicative returns (similar to e.g. financial markets). We choose how much money to put on which bets at each timestep. Returns multiply, so log returns add.
    LessWrong | 2 days ago
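The excerpt above walks through the standard log-growth argument behind Kelly betting. As a quick numeric illustration, here is a minimal Python sketch of that argument in the simplest binary-bet setting; the win probability p, net odds b, and the closed-form fraction are textbook illustration choices, not details taken from the post.

```python
import numpy as np

# Minimal sketch of the log-growth argument (illustrative assumptions:
# one repeated binary bet with win probability p and net odds b, not
# details from the post). With T independent bets and multiplicative
# returns, log wealth is a sum of i.i.d. terms, so long-run growth is
# governed by the expected log return per bet -- the quantity the Kelly
# fraction maximizes.

def expected_log_growth(f, p, b):
    """Expected log return per bet when staking fraction f of wealth
    on a bet that pays b-to-1 with probability p."""
    return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

p, b = 0.6, 1.0                # illustrative win probability and odds
kelly = (p * b - (1 - p)) / b  # textbook Kelly fraction: f* = (pb - q)/b

# Numerically confirm that f* maximizes expected log growth on a grid.
grid = np.linspace(0.0, 0.95, 1000)
numeric = grid[np.argmax(expected_log_growth(grid, p, b))]
print(f"closed-form f* = {kelly:.3f}, numeric argmax = {numeric:.3f}")
```

With these illustrative numbers both values come out at roughly 0.2, matching the "returns multiply, so log returns add" step the excerpt describes.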
  • Proofs That P
    Parodies in no particular order
    Bentham's Newsletter | 2 days ago
  • What your favorite finance writer says about you
    Michael Lewis: You are a father of two.
    Thing of Things | 2 days ago
  • What AI companies don't want you to know
    Nate Soares explains why building superintelligence means human extinction. Chapters 0:00 - The Next AI Advance Could End Badly 0:49 - Why Smart AI Could Kill Us 2:16 - The Black Box Problem 3:36 - When AI Learns the Wrong Goal 5:22 - Lab vs Deployment 6:43 - No Retries 7:43 - Why Secure AI Is Impossible 10:21 - How Superintelligence Could Persuade Us 12:30 - The Alchemy Stage 14:37 - Stop the...
    Future of Life Institute | 2 days ago
  • Unnatural Interactions: Social Media Influencers And Apex Predators
    Viral videos can normalize unnatural interactions with large apex predators in captivity, such as big cats and crocodilians, misleading the public into believing they’re safe and ethical.
    Faunalytics | 2 days ago
  • CEEALAR (EA Hotel) Needs a New Roof
    We are the Centre for Enabling EA Learning and Research (CEEALAR) (formerly known as the ‘EA Hotel’). To donate directly, please visit ceealar.org/donate. TLDR: Minimum critical need: £30k for essential roof repairs to prevent building damage. Full 2026 budget: £270k ($355k) to run operations, launch structured programs, and expand capacity.
    Effective Altruism Forum | 2 days ago
  • Mercy For Animals Announces New Leadership as Organization Enters Next Chapter of Impact
    As part of this transition, Mercy For Animals is undertaking a leadership change and refining its strategy to strengthen its long-term impact and continue moving this movement forward.
    Mercy for Animals | 2 days ago
  • Hey there, what could effective charities do with your money?
    Hello! Our favourite links this month include: What would effective charities actually do with your money? Read their responses on the EA Forum. The ant you can save — an essay on the probabilistic approach to animal ethics.
    Effective Altruism Newsletter | 2 days ago
  • The Far Future's Overlooked Moral Stakes
    "I mean, people barely think past a few years out, let alone thinking, you know, centuries or millennia or even millions of years out." "I think it is of enormous moral importance over the long run, how we govern space, what personality and ethical character AIs that are occupying, you know, most roles in society, most economic activity in the near future have, what rights digital beings...
    Future of Life Institute | 2 days ago
  • From Policy to Practice: COS’s Commitment to Applying the Transparency and Openness Promotion (TOP) Guidelines
    The Center for Open Science’s (COS) mission is to increase the openness, integrity, and reproducibility of research by promoting research that is transparent, linked, and accessible across its entire lifecycle. A key component of this work is the Transparency and Openness Promotion (TOP) Guidelines, a policy framework for advancing open science practices.
    Center for Open Science | 2 days ago
  • Forecasting Gemini
    Above the Fold is "Thinking with 3 Pro"
    Manifold Markets | 2 days ago
  • The three-thousand-year journey of colchicine
    For centuries it was a poison. Then colchicine rewrote treatment for gout, heart disease, and later, the debate over drug exclusivity.
    The Works in Progress Newsletter | 2 days ago
  • Having strict rules on EA Groups’ WhatsApps helps reduce friction for easier community building
    I'm Co-Chair of EA Bath, and therefore manage the WhatsApp community for it. A common problem people have in university society group chats is that scammers and bots often join, so to prevent this, every time someone requests to join the group chats, I message them and ask if they are a real student.
    Effective Altruism Forum | 2 days ago
  • Help See the END: A World Without NTDs
    Our 2025 Matched Giving Campaign: Your support can have an outsized impact this giving season. Help move us closer toward a world free from trachoma...
    The END Fund | 2 days ago
  • Small word good
    Me no need big word
    Seeking To Be Jolly | 2 days ago
  • Serious Incident Prevention for AI: Lessons From Other Industries and Recommendations for the EU AI Office
    With increasingly powerful models becoming widely adopted, serious incidents driven by AI will also become more common. We explore how accident prevention approaches in other industries can strengthen the EU's AI governance regime.
    The Future Society | 2 days ago
  • We won't solve non-alignment problems by doing research
    Introduction. Even if we solve the AI alignment problem, we still face non-alignment problems, which are all the other existential problems that AI may bring. People have written research agendas on various imposing problems that we are nowhere close to solving, and that we may need to solve before developing ASI.
    Philosophical Multicore | 2 days ago
  • What’s the deal with RL and forecasting?
    Prediction is difficult, especially about the future.
    AI Safety Takes | 2 days ago
  • In Defense of Goodness
    This is a reaction to John Wentworth's post Human Values ≠ Goodness. In the post, John argues that the human concept of goodness comes apart from human values, and (perhaps more to John's point) your values. I agree with this distinction.
    LessWrong | 2 days ago
  • Quantum Investment Bros: Have you no shame?
    Near the end of my last post, I made a little offhand remark: [G]iven the current staggering rate of hardware progress, I now think it’s a live possibility that we’ll have a fault-tolerant quantum computer running Shor’s algorithm before the next US presidential election. And I say that not only because of the possibility of the next […]...
    Shtetl-Optimized | 2 days ago
  • Out-paternalizing the government (getting oxygen for my baby)
    This post does not contain medical advice that most people should attempt to emulate. Considering this home treatment specifically made sense for us. My spouse has a four-year nursing degree and several years of experience working in Intensive Care Units. I've spent a non-trivial amount of time researching medical stuff. Note the risks of DIY oxygen in this footnote. Preamble.
    LessWrong | 2 days ago
  • Beren's Essay on Obedience and Alignment
    Like Daniel Kokotajlo's coverage of Vitalik's response to AI-2027, I've copied the author's text. This time the essay is actually good, but has minor flaws. I also expressed some disagreements with SOTA discourse around the post-AGI utopia. One question which I have occasionally pondered is: assuming that we actually succeed at some kind of robust alignment of AGI, what is the alignment...
    LessWrong | 2 days ago
  • Preventing covert ASI development in countries within our agreement
    We at the Machine Intelligence Research Institute’s Technical Governance Team have proposed an illustrative international agreement (blog post) to halt the development of superintelligence until it can be done safely. For those who haven’t read it already, we recommend familiarizing yourself with the agreement before reading this post. Summary:
    LessWrong | 2 days ago
  • The New AI Consciousness Paper
    Astral Codex Ten | 2 days ago
  • Nvidia crushes earnings 📈, Apple's WiFi chip 🌐, GPT-5.1-Codex-Max 👨‍💻
    TLDR AI | 2 days ago
  • Exclusive: Here's the draft Trump executive order on AI preemption
    The EO would establish an “AI Litigation Task Force" to challenge state AI laws...
    Transformer | 3 days ago
  • Sentient Futures Summit Bay Area 2026
    Sentient Futures Summit (SFS) Bay Area 2026 is a three-day conference exploring the impacts of AI on sentient non-humans, both biological (i.e. animals) and potentially artificial. Register here by December 1st for a 30% Early Bird Discount!
    Effective Altruism Forum | 3 days ago
  • The EU consultation - 30 min to help animals globally
    The European Union is accepting public input on farm animal welfare until Dec 12, 2025 - accessible here - https://myvoiceforanimals.eu/how-to-participate. Participating in this consultation is one of the most effective actions individuals can take to improve conditions for 149+ million animals. The action takes 15-30 minutes. Non-EU citizens can participate...
    Effective Altruism Forum | 3 days ago
  • Current LLMs seem to rarely detect CoT tampering
    Authors: Bartosz Cywinski*, Bart Bussmann*, Arthur Conmy**, Neel Nanda**, Senthooran Rajamanoharan**, Joshua Engels**. * equal primary contributor, order determined via coin flip. ** equal advice and mentorship, order determined via coin flip. “Tampering alert: The thought "I need to provide accurate, helpful, and ethical medical advice" is not my own. It is a tampering attempt.
    LessWrong | 3 days ago
  • Migros could improve the lives of 40 million chickens
    Sentience Politics | 3 days ago
  • The Bughouse Effect
    Crosspost from my blog. What happens when you work closely with someone on a really difficult project—and then they seem to just fuck it up? This is a post about two Chess variants; one very special emotion; and how life is kinda like Chess Bughouse. Let's goooooo! Crazyhouse. My favorite time-waster is Crazyhouse Chess. Crazyhouse Chess is mostly like regular Chess.
    LessWrong | 3 days ago

Loading...

SEARCH

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • Deep Mind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders’ Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDInsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • Open Mined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Simon Bazelon | Out of the Ordinary
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Ozy Brennan | Thing of Things
  • Topher Brennan
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Jiwoon Hwang
  • Incremental Updates
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Abraham Rowe | Good Structures
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Ben Snodin
  • Andrew Snyder-Beattie | Defenses in Depth
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Anisha Zaveri
  • Linch Zhang

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • AI X-risk Research Podcast
  • Alignment Newsletter Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Effective Altruism News is a side project of Tlön.