Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller Intl
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • Training Qwen-1.5B with a CoT legibility penalty
    I tried training Qwen2.5-1.5B with RL on math to both get correct answers and have a CoT that doesn’t look like human-understandable math reasoning. RL sometimes succeeds at hacking my monitor, and when I strengthen my monitor, it fails at finding CoT that are both illegible and helpful, even after training for roughly 4000 steps (~1B generated tokens).
    LessWrong | 3 hours ago
  • Hospitalization: A Review
    I woke up Friday morning w/ a very sore left shoulder. I tried stretching it, but my left chest hurt too. Isn't pain on one side a sign of a heart attack? Chest pain, arm/shoulder pain, and my breathing is pretty shallow now that I think about it, but I don't think I'm having a heart attack because that'd be terribly inconvenient.
    LessWrong | 5 hours ago
  • The Thinking Machines Tinker API is good news for AI control and security
    Last week, Thinking Machines announced Tinker. It’s an API for running fine-tuning and inference on open-source LLMs that works in a unique way. I think it has some immediate practical implications for AI safety research: I suspect that it will make RL experiments substantially easier, and increase the number of safety papers that involve RL on big models.
    AI Alignment Forum | 5 hours ago
  • The Thinking Machines Tinker API is good news for AI control and security
    Last week, Thinking Machines announced Tinker. It’s an API for running fine-tuning and inference on open-source LLMs that works in a unique way. I think it has some immediate practical implications for AI safety research: I suspect that it will make RL experiments substantially easier, and increase the number of safety papers that involve RL on big models.
    LessWrong | 5 hours ago
  • At odds with the unavoidable meta-message
    It is a truism known to online moderators that when two commenters are going back and forth in heated exchange, and one lays out rejoinders in paragraph after paragraph of dense text, then two things will have happened: Our careful communicator may or may not have succeeded at conveying well-reasoned insights hitherto unknown by his interlocutor that will change her mind.
    LessWrong | 6 hours ago
  • Realistic Reward Hacking Induces Different and Deeper Misalignment
    TL;DR: I made a dataset of realistic harmless reward hacks and fine-tuned GPT-4.1 on it. The resulting models don't show emergent misalignment on the standard evals, but they do alignment fake (unlike models trained on toy reward hacks), seem more competently misaligned, are highly evaluation-aware, and the effects persist when mixing in normal data.
    LessWrong | 7 hours ago
  • I take antidepressants. You’re welcome
    It’s amazing how much smarter everyone else gets when I take antidepressants. It makes sense that the drugs work on other people, because there’s nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else’s, which is a heavy burden because some of ya’ll are terrible at life. You date the wrong people.
    LessWrong | 7 hours ago
  • Towards a Typology of Strange LLM Chains-of-Thought
    Intro. LLMs being trained with RLVR (Reinforcement Learning from Verifiable Rewards) start off with a 'chain-of-thought' (CoT) in whatever language the LLM was originally trained on. But after a long period of training, the CoT sometimes starts to look very weird; to resemble no human language; or even to grow completely unintelligible. Why might this happen?
    LessWrong | 7 hours ago
  • Nobel Season
    Above the Fold plays the waiting game
    Manifold Markets | 11 hours ago
  • Chinese dams hold billions of people to ransom
    Could desalination make them irrelevant?
    The Works in Progress Newsletter | 14 hours ago
  • How AI Will Transform Military Command and Control - Paul Scharre
    Listen now | A conversation with Paul Scharre, author of Four Battlegrounds: Power in the Age of Artificial Intelligence, who joins us to talk about
    Sentinel | 14 hours ago
  • Mercy For Animals secures historic win for plant-based food in the U.S. military
    Beginning in 2027, all vegetarian MREs will be replaced with vegan options WASHINGTON — In a groundbreaking move recently announced by Pentagon News, the U.S. military will replace its four vegetarian MREs (meals ready to eat) with fully plant-based versions in 2027. The change comes after years of advocacy by Mercy For Animals and its […].
    Mercy for Animals | 15 hours ago
  • If We Could Grok Heaven, Pascal's Wager Would Seem More Intuitive
    We would aim for heaven if we knew what it was like
    Bentham's Newsletter | 15 hours ago
  • The Thinking Machines Tinker API is good news for AI control and security
    It's a promising design for reducing model access inside AI companies.
    Redwood Research | 16 hours ago
  • An intense battle over the RAISE Act is entering its final stretch
    New York State's AI bill is more ambitious than California’s SB 53 — and is facing opposition from Andreessen Horowitz and other tech groups...
    Transformer | 16 hours ago
  • The RAISE Act can stop the AI industry’s race to the bottom
    Opinion: Assembly Member Alex Bores argues that regulation can prevent market pressure from encouraging the release of dangerous AI models, without harming innovation.
    Transformer | 16 hours ago
  • Transforming Research, Together: Shaping the Future of Openness and Rigor
    COS’s 2026–2028 Strategic Planning Process. As the global research system evolves—technologically, politically, and culturally—so must the organizations that support it. At the Center for Open Science (COS), we’re developing a bold and focused strategy for 2026–2028 to meet the moment and our shared future with clarity, collaboration, and impact. This planning process comes at a pivotal time.
    Center for Open Science | 18 hours ago
  • Why frontier AI can't solve this professor's math problem - Greta Panova
    Greta Panova wrote a math problem so difficult that today’s most advanced AI models don’t know where to begin.
    Epoch Newsletter | 18 hours ago
  • Sightsavers’ Abdulai Dumbuya wins award for inclusive education work
    Abdulai has been recognised at this year’s Presidential National Best Teachers Awards in Sierra Leone for his work to make education systems more inclusive of children with disabilities.
    Sightsavers | 19 hours ago
  • If we can’t control MechaHitler, how will we steer AGI?
    80,000 Hours | 20 hours ago
  • Affect entitlement
    Sometimes things are boring
    Contemplatonist | 20 hours ago
  • How well can large language models predict the future?
    When will artificial intelligence (AI) match top human forecasters at predicting the future? In a recent podcast episode, Nate Silver predicted 10–15 years. Tyler Cowen disagreed, expecting a 1–2 year timeline. Who’s more likely to be right?
    Effective Altruism Forum | 21 hours ago
  • What to do when every crisis needs your $20
    Cori Jackson — a single mom living in Indiana — took in her two young nieces to keep them out of foster care this summer. It hasn’t been easy. The youngest still isn’t potty-trained. The oldest isn’t used to having food in the fridge so, sometimes, she eats so much it makes her sick. The […]...
    Future Perfect | 21 hours ago
  • What to do about near-term cluelessness in animal welfare
    (Context: I’m not an expert in animal welfare. My aim is to sketch a potentially neglected perspective on prioritization, not to give highly reliable object-level advice.) Summary: We seem to be clueless about our long-term impact. We might therefore consider it more robust to focus on neartermist causes, in particular animal welfare.
    Effective Altruism Forum | 21 hours ago
  • How AI may become deceitful, sycophantic... and lazy
    Disclaimers: I am a computational physicist, not a machine learning expert: set your expectations of accuracy accordingly. All my text in this post is 100% human-written without AI assistance. Introduction: The threat of human destruction by AI is generally regarded by longtermists as the most important cause facing humanity.
    Effective Altruism Forum | 21 hours ago
  • The Relationship Between Social Punishment and Shared Maps
    A punishment is when one agent (the punisher) imposes costs on another (the punished) in order to affect the punished's behavior. In a Society where thieves are predictably imprisoned and lashed, people will predictably steal less than they otherwise would, for fear of being imprisoned and lashed.
    LessWrong | 1 day ago
  • Hiring : Research Assistants at Georgetown University
    gui2de is hiring student RAs at Georgetown University for the academic year 2025-2026.
    Georgetown University Initiative on Innovation, Development and Evaluation | 1 day ago
  • Sobeys Puts Profits Over People, Animals and Promises
    Safeway employees across Alberta are sounding the alarm about Sobeys, their parent company, owned by Empire Company Limited. Through its “Truck You, Sobeys” campaign, the United Food and Commercial Workers union accuses Sobeys of cutting delivery routes and reducing full-time jobs to protect profit margins — moves that hurt both workers and customers. This public […].
    Mercy for Animals | 1 day ago
  • Vegan Meals Ready-to-Eat (MREs) Coming to US Military Rations by 2027
    Mercy for Animals | 1 day ago
  • Spooky Collusion at a Distance with Superrational AI
    TLDR: We found that models can coordinate without communication by reasoning that their reasoning is similar across all instances, a behavior known as superrationality. Superrationality is observed in recent powerful models and outperforms classic rationality in strategic games. Current superrational models cooperate more often with AI than with humans, even when both are said to be rational.
    LessWrong | 1 day ago
  • Irresponsible Companies Can Be Made of Responsible Employees
    tl;dr: In terms of financial interests of an AI company, bankruptcy and the world ending are both equally bad. If a company acted in line with its financial interests , it would happily accept significant extinction risk for increased revenue. There are plausible mechanisms which would allow a company to act like this even if virtually every employee would prefer the opposite.
    LessWrong | 1 day ago
  • You Should Get a Reusable Mask
    A pandemic that's substantially worse than COVID-19 is a serious possibility. If one happens, having a good mask could save your life. A high quality reusable mask is only $30 to $60, and I think it's well worth it to buy one for yourself. Worth it enough that I think you should order one now if you don't have one already. But if you're not convinced, let's do some rough estimation.
    LessWrong | 1 day ago
  • Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior
    This is a link post for two papers that came out today: Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.). Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.).
    LessWrong | 1 day ago
  • Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior
    This is a link post for two papers that came out today: Inoculation Prompting: Eliciting traits from LLMs during training can suppress them at test-time (Tan et al.). Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment (Wichers et al.).
    AI Alignment Forum | 1 day ago
  • Gemini bundling 📱, Sam Altman Q&A 💬, the React Foundation 👨‍💻
    TLDR AI | 1 day ago
  • Evaluating Gemini 2.5 Deep Think’s math capabilities
    Epoch Blog | 1 day ago
  • Hidden Open Thread 402.5
    Astral Codex Ten | 1 day ago
  • Prediction markets & many experts think authoritarian capture of the US looks distinctly possible
    The following is a quick collection of forecasting markets and opinions from experts which give some sense of how well-informed people are thinking about the state of US democracy. This isn't meant to be a rigorous proof that democracy is under threat (DM me for that), just a collection which I hope will get people thinking about what's happening in the US now.
    Effective Altruism Forum | 1 day ago
  • Can cash accelerate the end of extreme poverty? Taking the next big step in Malawi
    In Malawi, we’re answering a new question: can cash not only transform individual lives but entire communities, accelerating the end of extreme poverty? The evidence is clear that large, unconditional cash transfers help people escape extreme poverty. Now we’re testing how it works at scale and learning how to make it even more effective along the […]...
    GiveDirectly | 1 day ago
  • How Networks Vibrate: From Oscillators to Eigenmodes
    A math and engineering friendly tour of how networks “choose” to vibrate. At the Ekkolapto Polymath Salon @ Frontier Tower in San Francisco, Andrés Gómez Emilsson (QRI Director of Research) presents our program combining bottom-up oscillator simulations with top-down spectral graph theory to reveal a graph’s resonant modes and symmetries.
    Qualia Research Institute | 1 day ago
  • EA Forum Digest #261
    Hello! Draft Amnesty Week starts on Monday! Check out the “What posts would you like someone to write?” thread if you’d like some inspiration. Two weeks left to enter the ‘Essays on Longtermism’ Competition — the top prize is $1000. Also, the application deadline for EAGxSingapore is coming up on October 20. Enjoy the posts! :)
    EA Forum Digest | 1 day ago
  • I take antidepressants. You’re welcome
    It’s amazing how much smarter everyone else gets when I take antidepressants. It makes sense that the drugs work on other people, because there’s nothing in me to fix. I am a perfect and wise arbiter of not only my own behavior but everyone else’s, which is a heavy burden because some of ya’ll are …
    Aceso Under Glass | 2 days ago
  • Plans A, B, C, and D for misalignment risk
    Different plans for different levels of political will
    Redwood Research | 2 days ago
  • Plans A, B, C, and D for misalignment risk
    I sometimes think about plans for how to handle misalignment risk. Different levels of political will for handling misalignment risk result in different plans being the best option. I often divide this into Plans A, B, C, and D (from most to least political will required). See also Buck's quick take about different risk level regimes.
    LessWrong | 2 days ago
  • Plans A, B, C, and D for misalignment risk
    I sometimes think about plans for how to handle misalignment risk. Different levels of political will for handling misalignment risk result in different plans being the best option. I often divide this into Plans A, B, C, and D (from most to least political will required). See also Buck's quick take about different risk level regimes.
    AI Alignment Forum | 2 days ago
  • The Homework: October 8, 2025
    Welcome to the October 8, 2025 Main edition of The Homework, the official newsletter of California YIMBY — legislative updates, news clips, housing research and analysis, and the latest writings from the California YIMBY team. News from Sacramento All of…. The post The Homework: October 8, 2025 appeared first on California YIMBY.
    California YIMBY | 2 days ago
  • The Pig Hates It
    How to Lose Friends and Infuriate People
    Linch Zhang | 2 days ago
  • The Second Coming as a Solution to the Boltzmann Brain Problem
    Defeating entropy, not just death
    Bentham's Newsletter | 2 days ago
  • Too many problems? How to choose what to focus on
    With so many urgent problems in the world, how do you decide where to focus? In this video, we’ll share a simple but powerful framework for choosing the areas where you can have the greatest impact. Care about all the things? Here’s how to cut through the noise, avoid spreading yourself too thin, and focus your energy where it really counts.
    Giving What We Can | 2 days ago
  • Aquaculture Fundamentals: What We Included & What We Left Out
    Today marks the release of Faunalytics’ Aquaculture Fundamentals! This blog gives you an overview of what you can find inside, and outlines what we left out and where you can learn more. The post Aquaculture Fundamentals: What We Included & What We Left Out appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Altruists are (often) interpersonally nicer
    If you spend too much time on X, you’ve probably seen this annoying heatmap meme:
    Thing of Things | 2 days ago
  • What the GAIN AI Act could mean for chip exports
    A battle is simmering over proposals to make US chipmakers sell to domestic customers before exporting to countries such as China
    Transformer | 2 days ago
  • Adoption Without Adversity: BIPOC Experiences In Companion Animal Acquisition
    Despite adoption through a rescue or shelter being a highly ethical route to finding an animal companion, BIPOC-identifying individuals risk facing discrimination and rejection along the way. How can we make adoption a more empathetic and equitable experience? The post Adoption Without Adversity: BIPOC Experiences In Companion Animal Acquisition appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Use Your Comms Skills for Animals! Remote Manager Role w/ 4-Day Work Week
    The Educated Choices Program (ECP) is seeking a full-time, remote Communications Manager to join our team! We are a nonprofit dedicated to creating a healthier, more sustainable world by empowering students and communities to make informed food choices.
    Animal Advocacy Forum | 2 days ago
  • October Brief | Visa restrictions didn't stop this consultant from making an impact
    10 new roles from AI policy in the UK to global health research—closing soon.
    EACN Newsletter | 2 days ago
  • Covering Null Results: How to Turn “Nothing” into News
    A new article in The Open Notebook offers guidance for science journalists on how to cover null results—studies that find no significant effect—and why these findings are essential to the scientific process.
    J-PAL | 2 days ago
  • Direct cash transfers effective in poverty alleviation, says J-PAL's Iqbal Singh Dhaliwal
    Hardware is hard but software is harder — India has built the hardware. Now the country must focus on the software – the people, their skills, their health and education, the economist tells Moneycontrol.
    J-PAL | 2 days ago
  • Opinion | States need to improve finances to expedite poverty alleviation agenda
    States need to improve their financial health to achieve faster poverty reduction, Iqbal Singh Dhaliwal, global executive director of J-PAL (part of the Massachusetts Institute of Technology, MIT), has said.
    J-PAL | 2 days ago
  • Direct cash transfers: ‘Studies show that on average people save money for productive uses or invest’
    Iqbal Dhaliwal, Global Executive Director, J-PAL (Abdul Latif Jameel Poverty Action Lab), which is based at the Massachusetts Institute of Technology’s (MIT’s) economics department and works to reduce poverty by ensuring that policy is informed by scientific evidence, believes that cash...
    J-PAL | 2 days ago
  • Animal abusers are getting let off the hook. Trump and his Supreme Court are partly to blame.
    If someone illegally double-parks on a one-way street and a cop walks by, the expectation is that they’d get fined. Similarly, you’d think that if a company that uses animals is caught mistreating them, it too would face some sort of legal repercussion. But for many businesses in the US, that’s not what’s happening. […]...
    Future Perfect | 2 days ago
  • Fiscal reform: J-PAL’s Dhaliwal urges states to fix finances for faster poverty reduction; calls for scheme rationalisation and focus on human capital
    Iqbal Singh Dhaliwal of J-PAL urges states to strengthen finances for poverty reduction, highlighting excessive welfare schemes and unproductive fund use.
    J-PAL | 2 days ago
  • AGI × Animals Wargame
    In September 2025 in Warsaw prior to CARE, 19 animal movement leaders came together to participate in the first ever AGI and animal welfare wargame. TLDR: Great-power moves and food-security shocks repeatedly overrode animal welfare efforts; standard campaigning tactics like mass mobilization struggled under political crackdowns and were overshadowed by government’s concerns about public...
    Effective Altruism Forum | 2 days ago
  • One Battle After Another as Biden Era Centrism
    Paul Thomas Anderson's newest film is a paean to moderate leftism
    Maximum Progress | 2 days ago
  • Replacing RL w/ Parameter-based Evolutionary Strategies
    I want to highlight this paper (from Sept 29, 2025) on an alternative to RL (for fine-tuning pre-trained LLMs) which: performs better; requires less data; is consistent across seeds; is robust (i.e., no grid search over hyperparameters needed); and shows less reward hacking (e.g., when optimizing for conciseness it naturally stays close to the original model, i.e., low KL divergence).
    LessWrong | 2 days ago
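The paper itself isn't excerpted here; as a rough sketch of the family of methods being referenced, a minimal parameter-space evolution-strategies update might look like the following (the function, hyperparameters, and toy reward are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def es_step(theta, reward_fn, rng, sigma=0.02, alpha=0.01, pop=32):
    """One parameter-space evolution-strategies update: perturb the
    parameters with Gaussian noise, score each perturbed copy with a
    black-box reward, and move along the reward-weighted noise average."""
    eps = rng.standard_normal((pop, theta.size))
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    ranks = rewards.argsort().argsort()       # rank-normalize for scale robustness
    weights = ranks / (pop - 1) - 0.5         # weights in [-0.5, 0.5]
    grad_est = weights @ eps / (pop * sigma)  # estimated ascent direction
    return theta + alpha * grad_est

# Toy usage: recover a target vector by maximizing negative squared error.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for _ in range(500):
    theta = es_step(theta, lambda t: -np.sum((t - target) ** 2), rng)
```

Because only scalar rewards are needed (no backprop through the objective), the same loop applies to non-differentiable objectives; rank-normalizing rewards is a common trick for robustness to reward scale.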
  • You Should Get a Reusable Mask
    A pandemic that's substantially worse than COVID-19 is a serious possibility. If one happens, having a good mask could save your life. A high quality reusable mask is only $30 to $60, and I think it's well worth it to buy one for yourself. Worth it enough that I think you should order one now if you don't have one already. But if you're not convinced, let's do some rough estimation.
    Effective Altruism Forum | 2 days ago
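The post's own estimation isn't reproduced above; as a toy version of that kind of reasoning, here is a sketch in which every input except the $30-to-$60 price range is an assumed placeholder, not a number from the post:

```python
# Toy Fermi estimate; every number below except the mask price range
# is an assumed placeholder for illustration, not from the post.
p_severe_pandemic_per_year = 0.01   # assumed annual probability
years_of_mask_life = 10             # assumed usable lifetime of the mask
p_mask_is_decisive = 0.001          # assumed chance it averts serious harm, given a pandemic
value_of_harm_averted = 10_000_000  # assumed dollar valuation of that harm

mask_cost = (30 + 60) / 2           # midpoint of the post's $30-$60 range

expected_benefit = (p_severe_pandemic_per_year * years_of_mask_life
                    * p_mask_is_decisive * value_of_harm_averted)
print(f"expected benefit ~ ${expected_benefit:,.0f} vs. cost ${mask_cost:.0f}")
```

With these placeholder inputs the expected benefit comfortably exceeds the cost; the point of the sketch is only that the conclusion survives even quite pessimistic assumptions.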
  • Mechanisms Rule Hypotheses Out, But Not In
    If there is no plausible mechanism by which a scientific hypothesis could be true, then it’s almost certainly false. But if there is a plausible mechanism for a hypothesis, then that only provides weak evidence that it’s true. An example of the former: Astrology teaches that the positions of planets in the sky when you’re born can affect your life trajectory.
    Philosophical Multicore | 2 days ago
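The asymmetry the post describes can be made concrete with a toy Bayes-factor calculation; all probabilities below are assumed purely for illustration:

```python
# Toy Bayes-factor arithmetic (all numbers assumed for illustration).
# H: the hypothesis is true.  Evidence: whether a plausible mechanism exists.
prior = 0.01

# "No plausible mechanism" is strong evidence against H:
# true hypotheses almost always admit some plausible mechanism.
p_no_mech_given_h = 0.02
p_no_mech_given_not_h = 0.60
bf_against = p_no_mech_given_h / p_no_mech_given_not_h            # ~0.033

# "A plausible mechanism exists" is only weak evidence for H:
# false hypotheses often have plausible-sounding mechanisms too.
bf_for = (1 - p_no_mech_given_h) / (1 - p_no_mech_given_not_h)    # 0.98/0.40 = 2.45

def posterior(prior, bayes_factor):
    """Update prior odds by a Bayes factor and convert back to a probability."""
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

print(posterior(prior, bf_against))  # hypothesis nearly ruled out
print(posterior(prior, bf_for))      # only a modest update upward
```

Under these assumed numbers, lacking a mechanism shrinks the posterior by roughly 30x, while having one raises it by less than 3x, which is the "rule out, but not in" asymmetry in miniature.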
  • AI welfare, by Anthropic | PODCAST EXCERPT
    The full episode: https://youtu.be/gXNQGXnuE5I?si=ilE9M5N1ewC2xg4z
    The Flares | 2 days ago
  • Nvidia xAI deal 🤝, Tesla's cheaper models 🚗, Qualcomm acquires Arduino 💻
    TLDR AI | 2 days ago
  • What is the most active EA hub for someone with an EU passport?
    Interested in AI (safety) research and in pursuing a career of earning to give. I would love to meet a lot of fellow EAs/vegans/rationalist types while doing this. I'm currently in Belgium, and almost finished with my computer science master's degree from a ~150th-ranked university.
    Effective Altruism Forum | 2 days ago
  • Bending The Curve
    The odds are against you and the situation is grim. Your scrappy band are the only ones facing down a growing wave of powerful inhuman entities with alien minds and mysterious goals. The government is denying that anything could possibly be happening and actively working to shut down the few people trying things that might help.
    LessWrong | 2 days ago
  • Petri: An open-source auditing tool to accelerate AI safety research
    This is a cross-post of some recent Anthropic research on building auditing agents. The following is quoted from the Alignment Science blog post. tl;dr: We're releasing Petri (Parallel Exploration Tool for Risky Interactions), an open-source framework for automated auditing that uses AI agents to test the behaviors of target models across diverse scenarios.
    LessWrong | 2 days ago
  • VICTORY: U.S. Military to Serve Millions of Plant-Based Meals
    After years of dedicated federal policy work led by Mercy For Animals, the U.S. military is making a monumental shift in its food procurement. Starting in 2027, the military will replace the four existing vegetarian MREs (meals ready to eat) with fully plant-based options. This policy change means that four of the 24 MRE menu […].
    Mercy for Animals | 2 days ago
  • Generalization and the Multiple Stage Fallacy?
    Doomimir: The possibility of AGI being developed gradually doesn't obviate the problem of the "first critical try": the vast hypermajority of AGIs that seem aligned in the "Before" regime when they're weaker than humans, will still want to kill the humans "After" they're stronger and the misalignment can no longer be "corrected". The speed of the transition between those regimes doesn't matter.
    LessWrong | 2 days ago
  • How America’s Regional Planning Boards Exclude Renters and Transit Users
    America’s regional planning boards are stacked against transit riders. While renters make up over 30% of households in typical metropolitan areas, they hold just 3% of seats on regional planning agencies that control billions of dollars in federal transportation funding.
    California YIMBY | 3 days ago
  • The Legal Trap That’s Making California Condos Unaffordable—And It’s Not What You Think
    California’s construction defect liability system (the legal rules that let people sue builders for problems with new buildings) is adding up to $18,300 per unit to the costs of condominiums. What was supposed to protect consumers has become a barrier….
    California YIMBY | 3 days ago
  • A Cautious Celebration: Best Western Shows Progress
    After months of public pressure, Best Western has finally shared meaningful progress toward its global cage-free commitment — and because of this, we’re cautiously celebrating and pausing our public campaign. Best Western’s most recent statement has shown significant progress:
    Animal Advocacy Forum | 3 days ago
  • Sad and happy day
    Today, of course, is the second anniversary of the genocidal Oct. 7 invasion of Israel—the deadliest day for Jews since the Holocaust, and the event that launched the current wars that have been reshaping the Middle East for better and/or worse. Regardless of whether their primary concern is for Israelis, Palestinians, or both, I’d hope […]...
    Shtetl-Optimized | 3 days ago
  • AI can now do math. But can it ask good questions? - Ken Ono
    When mathematicians make breakthroughs, they hallucinate too.
    Epoch Newsletter | 3 days ago
  • A Core Area of Disagreement Between Theists and Atheists: Is Theism Ridiculous?
    Is believing in God like believing in Santa Claus?
    Bentham's Newsletter | 3 days ago
  • What Do Young Chinese And Japanese Consumers Think About Meat Alternatives?
    This study aims to identify the perceptions and preferences of young consumers in two of the largest meat substitute markets in the world. The post What Do Young Chinese And Japanese Consumers Think About Meat Alternatives? appeared first on Faunalytics.
    Faunalytics | 3 days ago
  • Inconsistent Anthropocentrism
    Animals < Humans < Nature?
    Good Thoughts | 3 days ago
  • GCBR Organization Updates, October 2025
    Updates from CEPI, NTI | bio, GHSN, Asia CHS, CSR, CCDD, Blueprint Biosecurity, UNIDIR, Brown Pandemic Center, CLTR, 1DaySooner, SecureBio, Sentinel Bio, MBDF, Open Philanthropy and IBBIS
    GCBR Organization Updates | 3 days ago
  • Request for Proposals: Meta Charity Funders - Fall 2025
    We will get back to you within two weeks of the deadline with a request for more information or an invitation to an interview, if there is sufficient interest in your application. Decisions are communicated and grants paid out in mid-December.
    Effective Altruism Forum | 3 days ago
  • More Funding and Better Policies Are The Key Drivers For The Private Sector to Produce Safe And Nutritious Foods
    Meet Naguti Scovia, founder of ABBA Quality Foods.
    Global Alliance for Improved Nutrition | 3 days ago
  • Free-Wheeling, Personality-Driven EA Conversations
    Chana Messinger (80k’s head of video, formerly of CEA’s Community Health team) and I (Matt Reardon, formerly of 80k) have been recording Expected Volume (Apple, Spotify), a podcast of unscripted conversations between us and some great guests from across the community, including: Andy Masley. Phil Trammell. Conor Barnes. Daniel Filan. Alex Lawsen. Trevor Levin. Julia Wise.
    Effective Altruism Forum | 3 days ago
  • "Intelligence" -> "Relentless, Creative Resourcefulness"
    A frame I am trying on: When I say I'm worried about takeover by "AI superintelligence", I think the thing I mean by "intelligence" is "relentless, creative resourcefulness". I think Eliezer argues something like "in the limit, superintelligence needs to include super-amounts of Relentless, Creative Resourcefulness".
    LessWrong | 3 days ago
  • Eliminating contrails from flying could be incredibly cheap
    Could we halve aviation's climate impact at a fraction of the cost of sustainable aviation fuels?
    Sustainability by Numbers | 3 days ago
  • Gratitude for life
    A few days ago, I got a call at 2:39 a.m.
    De novo | 3 days ago
  • [Article] Introducing Better Futures
    This is a narration of ‘Introducing Better Futures ’ by William MacAskill; published 3rd August 2025. Narration by Perrin Walker (@perrinjwalker).
    ForeCast | 3 days ago
  • Do Things for as Many Reasons as Possible
    Epistemic Status: A fun heuristic. Advice is directional. We all are constantly making decisions about how to spend our time and resources, and I think making these decisions well is one of the most important meta-skills one can possibly have. Time spent valuably can quickly add up into new abilities and opportunities; time spent poorly will not be returned.
    LessWrong | 3 days ago
  • ChatGPT apps 🤖, Starlink's power play 🛰️, OpenAI DevDay 👨‍💻
    TLDR AI | 3 days ago
  • THL is still looking for OWA Asia and Europe Corporate Strategy Leads!
    THL is hiring an OWA Asia-Pacific Corporate Strategy Lead and OWA Europe Corporate Strategy Lead. The ideal candidate will be a strategic thinker with excellent communication skills who has experience with corporate pressure campaigns. They will be responsible for building relationships with OWA member groups and providing advice, coaching, and training, which will translate to them securing...
    Animal Advocacy Forum | 3 days ago
  • The Origami Men
    Of course, you must understand, I couldn't be bothered to act. I know weepers still pretend to try, but I wasn't a weeper, at least not then. It isn't even dangerous, the teeth only sharp to its target. But it would not have been right, you know? That's the way things are now. You ignore the screams.
    LessWrong | 3 days ago
  • ALDF is hiring an Associate General Counsel
    Hello everyone. ALDF is hiring an Associate General Counsel to work closely with me, the General Counsel, and Molly, our other Associate General Counsel. The job posting can be found here, and I'm copying the text of the post below. Please share with anyone in your network who could be a good fit. Kind regards, John Seber. Associate General Counsel. Fully Remote. Description.
    Animal Advocacy Forum | 3 days ago
  • Animal Place is Hiring: Executive Director (USA)
    Animal Place is a 501(c)(3) animal sanctuary and is one of the oldest, largest, and most respected sanctuaries dedicated to farmed animals in the United States. Apply: Executive Director, Animal Place. Location: California, United States. Salary: $120,000 to $150,000, commensurate with experience.
    Animal Advocacy Forum | 3 days ago
  • Join Us for the Great Apes
    Hello! We are seeking support for the Great Apes Law in Spain — a law that would be historic. The Spanish Government had a legal mandate to present a specific law for the protection of great apes within three months of the approval of the Animal Welfare Law in 2023. Two years have passed and nothing has been done.
    Animal Advocacy Forum | 3 days ago



PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barrat | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Existential Risk Observatory Newsletter
  • Farm Animal Welfare Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • AI X-risk Research Podcast
  • Alignment Newsletter Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.