Effective Altruism News

SUBSCRIBE

RSS Feed
X Feed

ORGANIZATIONS

  • Effective Altruism Forum
  • 80,000 Hours
  • Ada Lovelace Institute
  • Against Malaria Foundation
  • AI Alignment Forum
  • AI Futures Project
  • AI Impacts
  • AI Now Institute
  • AI Objectives Institute
  • AI Safety Camp
  • AI Safety Communications Centre
  • AidGrade
  • Albert Schweitzer Foundation
  • Aligned AI
  • ALLFED
  • Asterisk
  • altLabs
  • Ambitious Impact
  • Anima International
  • Animal Advocacy Africa
  • Animal Advocacy Careers
  • Animal Charity Evaluators
  • Animal Ethics
  • Apollo Academic Surveys
  • Aquatic Life Institute
  • Association for Long Term Existence and Resilience
  • Ayuda Efectiva
  • Berkeley Existential Risk Initiative
  • Bill & Melinda Gates Foundation
  • Bipartisan Commission on Biodefense
  • California YIMBY
  • Cambridge Existential Risks Initiative
  • Carnegie Corporation of New York
  • Center for Applied Rationality
  • Center for Election Science
  • Center for Emerging Risk Research
  • Center for Health Security
  • Center for Human-Compatible AI
  • Center for Long-Term Cybersecurity
  • Center for Open Science
  • Center for Reducing Suffering
  • Center for Security and Emerging Technology
  • Center for Space Governance
  • Center on Long-Term Risk
  • Centre for Effective Altruism
  • Centre for Enabling EA Learning and Research
  • Centre for the Governance of AI
  • Centre for the Study of Existential Risk
  • Centre of Excellence for Development Impact and Learning
  • Charity Entrepreneurship
  • Charity Science
  • Clearer Thinking
  • Compassion in World Farming
  • Convergence Analysis
  • Crustacean Compassion
  • DeepMind
  • Democracy Defense Fund
  • Democracy Fund
  • Development Media International
  • EA Funds
  • Effective Altruism Cambridge
  • Effective altruism for Christians
  • Effective altruism for Jews
  • Effective Altruism Foundation
  • Effective Altruism UNSW
  • Effective Giving
  • Effective Institutions Project
  • Effective Self-Help
  • Effective Thesis
  • Effektiv-Spenden.org
  • Eleos AI
  • Eon V Labs
  • Epoch Blog
  • Equalize Health
  • Evidence Action
  • Family Empowerment Media
  • Faunalytics
  • Farmed Animal Funders
  • FAST | Animal Advocacy Forum
  • Felicifia
  • Fish Welfare Initiative
  • Fistula Foundation
  • Food Fortification Initiative
  • Foresight Institute
  • Forethought
  • Foundational Research Institute
  • Founders Pledge
  • Fortify Health
  • Fund for Alignment Research
  • Future Generations Commissioner for Wales
  • Future of Life Institute
  • Future of Humanity Institute
  • Future Perfect
  • GBS Switzerland
  • Georgetown University Initiative on Innovation, Development and Evaluation
  • GiveDirectly
  • GiveWell
  • Giving Green
  • Giving What We Can
  • Global Alliance for Improved Nutrition
  • Global Catastrophic Risk Institute
  • Global Challenges Foundation
  • Global Innovation Fund
  • Global Priorities Institute
  • Global Priorities Project
  • Global Zero
  • Good Food Institute
  • Good Judgment Inc
  • Good Technology Project
  • Good Ventures
  • Happier Lives Institute
  • Harvard College Effective Altruism
  • Healthier Hens
  • Helen Keller INTL
  • High Impact Athletes
  • HistPhil
  • Humane Slaughter Association
  • IDinsight
  • Impactful Government Careers
  • Innovations for Poverty Action
  • Institute for AI Policy and Strategy
  • Institute for Progress
  • International Initiative for Impact Evaluation
  • Invincible Wellbeing
  • Iodine Global Network
  • J-PAL
  • Jewish Effective Giving Initiative
  • Lead Exposure Elimination Project
  • Legal Priorities Project
  • LessWrong
  • Let’s Fund
  • Leverhulme Centre for the Future of Intelligence
  • Living Goods
  • Long Now Foundation
  • Machine Intelligence Research Institute
  • Malaria Consortium
  • Manifold Markets
  • Median Group
  • Mercy for Animals
  • Metaculus
  • Metaculus | News
  • METR
  • Mila
  • New Harvest
  • Nonlinear
  • Nuclear Threat Initiative
  • One Acre Fund
  • One for the World
  • OpenAI
  • OpenMined
  • Open Philanthropy
  • Organisation for the Prevention of Intense Suffering
  • Ought
  • Our World in Data
  • Oxford Prioritisation Project
  • Parallel Forecast
  • Ploughshares Fund
  • Precision Development
  • Probably Good
  • Pugwash Conferences on Science and World Affairs
  • Qualia Research Institute
  • Raising for Effective Giving
  • Redwood Research
  • Rethink Charity
  • Rethink Priorities
  • Riesgos Catastróficos Globales
  • Sanku – Project Healthy Children
  • Schmidt Futures
  • Sentience Institute
  • Sentience Politics
  • Seva Foundation
  • Sightsavers
  • Simon Institute for Longterm Governance
  • SoGive
  • Space Futures Initiative
  • Stanford Existential Risk Initiative
  • Swift Centre
  • The END Fund
  • The Future Society
  • The Life You Can Save
  • The Roots of Progress
  • Target Malaria
  • Training for Good
  • UK Office for AI
  • Unlimit Health
  • Utility Farm
  • Vegan Outreach
  • Venten | AI Safety for Latam
  • Village Enterprise
  • Waitlist Zero
  • War Prevention Initiative
  • Wave
  • Wellcome Trust
  • Wild Animal Initiative
  • Wild-Animal Suffering Research
  • Works in Progress

PEOPLE

  • Scott Aaronson | Shtetl-Optimized
  • Tom Adamczewski | Fragile Credences
  • Matthew Adelstein | Bentham's Newsletter
  • Matthew Adelstein | Controlled Opposition
  • Vaidehi Agarwalla | Vaidehi's Blog
  • James Aitchison | Philosophy and Ideas
  • Scott Alexander | Astral Codex Ten
  • Scott Alexander | Slate Star Codex
  • Scott Alexander | Slate Star Scratchpad
  • Alexanian & Franz | GCBR Organization Updates
  • Applied Divinity Studies
  • Leopold Aschenbrenner | For Our Posterity
  • Amanda Askell
  • Amanda Askell's Blog
  • Amanda Askell’s Substack
  • Atoms vs Bits
  • Connor Axiotes | Rules of the Game
  • Sofia Balderson | Hive
  • Mark Bao
  • Boaz Barak | Windows On Theory
  • Nathan Barnard | The Good blog
  • Matthew Barnett
  • Ollie Base | Base Rates
  • Simon Bazelon | Out of the Ordinary
  • Tobias Baumann | Cause Prioritization Research
  • Tobias Baumann | Reducing Risks of Future Suffering
  • Nora Belrose
  • Rob Bensinger | Nothing Is Mere
  • Alexander Berger | Marginal Change
  • Aaron Bergman | Aaron's Blog
  • Satvik Beri | Ars Arcana
  • Aveek Bhattacharya | Social Problems Are Like Maths
  • Michael Bitton | A Nice Place to Live
  • Liv Boeree
  • Dillon Bowen
  • Topher Brennan
  • Ozy Brennan | Thing of Things
  • Catherine Brewer | Catherine’s Blog
  • Stijn Bruers | The Rational Ethicist
  • Vitalik Buterin
  • Lynette Bye | EA Coaching
  • Ryan Carey
  • Joe Carlsmith
  • Lucius Caviola | Outpaced
  • Richard Yetter Chappell | Good Thoughts
  • Richard Yetter Chappell | Philosophy, Et Cetera
  • Paul Christiano | AI Alignment
  • Paul Christiano | Ordinary Ideas
  • Paul Christiano | Rational Altruist
  • Paul Christiano | Sideways View
  • Paul Christiano & Katja Grace | The Impact Purchase
  • Evelyn Ciara | Sunyshore
  • Cirrostratus Whispers
  • Jesse Clifton | Jesse’s Substack
  • Peter McCluskey | Bayesian Investor
  • Greg Colbourn | Greg's Substack
  • Ajeya Cotra & Kelsey Piper | Planned Obsolescence
  • Owen Cotton-Barratt | Strange Cities
  • Andrew Critch
  • Paul Crowley | Minds Aren’t Magic
  • Dale | Effective Differentials
  • Max Dalton | Custodienda
  • Saloni Dattani | Scientific Discovery
  • Amber Dawn | Contemplatonist
  • De novo
  • Michael Dello-Iacovo
  • Jai Dhyani | ANOIEAEIB
  • Michael Dickens | Philosophical Multicore
  • Dieleman & Zeijlmans | Effective Environmentalism
  • Anthony DiGiovanni | Ataraxia
  • Kate Donovan | Gruntled & Hinged
  • Dawn Drescher | Impartial Priorities
  • Eric Drexler | AI Prospects: Toward Global Goal Convergence
  • Holly Elmore
  • Sam Enright | The Fitzwilliam
  • Daniel Eth | Thinking of Utils
  • Daniel Filan
  • Lukas Finnveden
  • Diana Fleischman | Dianaverse
  • Diana Fleischman | Dissentient
  • Julia Galef
  • Ben Garfinkel | The Best that Can Happen
  • Matthew Gentzel | The Consequentialist
  • Aaron Gertler | Alpha Gamma
  • Rachel Glennerster
  • Sam Glover | Samstack
  • Andrés Gómez Emilsson | Qualia Computing
  • Nathan Goodman & Yusuf | Longterm Liberalism
  • Ozzie Gooen | The QURI Medley
  • Ozzie Gooen
  • Katja Grace | Meteuphoric
  • Katja Grace | World Spirit Sock Puppet
  • Katja Grace | Worldly Positions
  • Spencer Greenberg | Optimize Everything
  • Milan Griffes | Flight from Perfection
  • Simon Grimm
  • Zach Groff | The Groff Spot
  • Erich Grunewald
  • Marc Gunther
  • Cate Hall | Useful Fictions
  • Chris Hallquist | The Uncredible Hallq
  • Topher Hallquist
  • John Halstead
  • Finn Hambly
  • James Harris | But Can They Suffer?
  • Riley Harris
  • Riley Harris | Million Year View
  • Peter Hartree
  • Shakeel Hashim | Transformer
  • Sarah Hastings-Woodhouse
  • Hayden | Ethical Haydonism
  • Julian Hazell | Julian’s Blog
  • Julian Hazell | Secret Third Thing
  • Jiwoon Hwang
  • Incremental Updates
  • Roxanne Heston | Fire and Pulse
  • Hauke Hillebrandt | Hauke’s Blog
  • Ben Hoffman | Compass Rose
  • Michael Huemer | Fake Nous
  • Tyler John | Secular Mornings
  • Mike Johnson | Open Theory
  • Toby Jolly | Seeking To Be Jolly
  • Holden Karnofsky | Cold Takes
  • Jeff Kaufman
  • Cullen O'Keefe | Jural Networks
  • Daniel Kestenholz
  • Ketchup Duck
  • Oliver Kim | Global Developments
  • Petra Kosonen | Tiny Points of Vast Value
  • Victoria Krakovna
  • Ben Kuhn
  • Alex Lawsen | Speculative Decoding
  • Gavin Leech | argmin gravitas
  • Howie Lempel
  • Gregory Lewis
  • Eli Lifland | Foxy Scout
  • Rhys Lindmark
  • Robert Long | Experience Machines
  • Garrison Lovely | Garrison's Substack
  • MacAskill, Chappell & Meissner | Utilitarianism
  • Jack Malde | The Ethical Economist
  • Jonathan Mann | Abstraction
  • Sydney Martin | Birthday Challenge
  • Daniel May
  • Conor McCammon | Utopianish
  • Peter McIntyre
  • Peter McIntyre | Conceptually
  • Pablo Melchor
  • Pablo Melchor | Altruismo racional
  • Geoffrey Miller
  • Fin Moorhouse
  • Luke Muehlhauser
  • Neel Nanda
  • David Nash | Global Development & Economic Advancement
  • David Nash | Monthly Overload of Effective Altruism
  • Eric Neyman | Unexpected Values
  • Richard Ngo | Narrative Ark
  • Richard Ngo | Thinking Complete
  • Elizabeth Van Nostrand | Aceso Under Glass
  • Oesterheld, Treutlein & Kokotajlo | The Universe from an Intentional Stance
  • James Ozden | Understanding Social Change
  • Daniel Paleka | AI Safety Takes
  • Ives Parr | Parrhesia
  • Dwarkesh Patel | The Lunar Society
  • Kelsey Piper | The Unit of Caring
  • Michael Plant | Planting Happiness
  • Michal Pokorný | Agenty Dragon
  • Georgia Ray | Eukaryote Writes Blog
  • Ross Rheingans-Yoo | Icosian Reflections
  • Josh Richards | Tofu Ramble
  • Jess Riedel | foreXiv
  • Anna Riedl
  • Hannah Ritchie | Sustainability by Numbers
  • David Roodman
  • Eli Rose
  • Siebe Rozendal
  • John Salvatier
  • Anders Sandberg | Andart II
  • William Saunders
  • Joey Savoie | Measured Life
  • Stefan Schubert
  • Stefan Schubert | Philosophical Sketches
  • Stefan Schubert | Stefan’s Substack
  • Nuño Sempere | Measure is unceasing
  • Harish Sethu | Counting Animals
  • Rohin Shah
  • Zeke Sherman | Bashi-Bazuk
  • Buck Shlegeris
  • Jay Shooster | jayforjustice
  • Carl Shulman | Reflective Disequilibrium
  • Jonah Sinick
  • Andrew Snyder-Beattie | Defenses in Depth
  • Ben Snodin
  • Nate Soares | Minding Our Way
  • Kaj Sotala
  • Tom Stafford | Reasonable People
  • Pablo Stafforini | Pablo’s Miscellany
  • Henry Stanley
  • Jacob Steinhardt | Bounded Regret
  • Zach Stein-Perlman | AI Lab Watch
  • Romeo Stevens | Neurotic Gradient Descent
  • Michael Story | Too long to tweet
  • Maxwell Tabarrok | Maximum Progress
  • Max Taylor | AI for Animals
  • Benjamin Todd
  • Brian Tomasik | Reducing Suffering
  • Helen Toner | Rising Tide
  • Philip Trammell
  • Jacob Trefethen
  • Toby Tremlett | Raising Dust
  • Alex Turner | The Pond
  • Alyssa Vance | Rationalist Conspiracy
  • Magnus Vinding
  • Eva Vivalt
  • Jackson Wagner
  • Ben West | Philosophy for Programmers
  • Nick Whitaker | High Modernism
  • Jess Whittlestone
  • Peter Wildeford | Everyday Utilitarian
  • Peter Wildeford | Not Quite a Blog
  • Peter Wildeford | Pasteur’s Cube
  • Peter Wildeford | The Power Law
  • Bridget Williams
  • Ben Williamson
  • Julia Wise | Giving Gladly
  • Julia Wise | Otherwise
  • Julia Wise | The Whole Sky
  • Kat Woods
  • Thomas Woodside
  • Mark Xu | Artificially Intelligent
  • Linch Zhang
  • Anisha Zaveri

NEWSLETTERS

  • AI Safety Newsletter
  • Alignment Newsletter
  • Altruismo eficaz
  • Animal Ask’s Newsletter
  • Apart Research
  • Astera Institute | Human Readable
  • Campbell Collaboration Newsletter
  • CAS Newsletter
  • ChinAI Newsletter
  • CSaP Newsletter
  • EA Forum Digest
  • EA Groups Newsletter
  • EA & LW Forum Weekly Summaries
  • EA UK Newsletter
  • EACN Newsletter
  • Effective Altruism Lowdown
  • Effective Altruism Newsletter
  • Effective Altruism Switzerland
  • Epoch Newsletter
  • Farm Animal Welfare Newsletter
  • Existential Risk Observatory Newsletter
  • Forecasting Newsletter
  • Forethought
  • Future Matters
  • GCR Policy’s Newsletter
  • Gwern Newsletter
  • High Impact Policy Review
  • Impactful Animal Advocacy Community Newsletter
  • Import AI
  • IPA’s weekly links
  • Manifold Markets | Above the Fold
  • Manifund | The Fox Says
  • Matt’s Thoughts In Between
  • Metaculus – Medium
  • ML Safety Newsletter
  • Open Philanthropy Farm Animal Welfare Newsletter
  • Oxford Internet Institute
  • Palladium Magazine Newsletter
  • Policy.ai
  • Predictions
  • Rational Newsletter
  • Sentinel
  • SimonM’s Newsletter
  • The Anti-Apocalyptus Newsletter
  • The EA Behavioral Science Newsletter
  • The EU AI Act Newsletter
  • The EuropeanAI Newsletter
  • The Long-termist’s Field Guide
  • The Works in Progress Newsletter
  • This week in security
  • TLDR AI
  • Tlön newsletter
  • To Be Decided Newsletter

PODCASTS

  • 80,000 Hours Podcast
  • Alignment Newsletter Podcast
  • AI X-risk Research Podcast
  • Altruismo Eficaz
  • CGD Podcast
  • Conversations from the Pale Blue Dot
  • Doing Good Better
  • Effective Altruism Forum Podcast
  • ForeCast
  • Found in the Struce
  • Future of Life Institute Podcast
  • Future Perfect Podcast
  • GiveWell Podcast
  • Gutes Einfach Tun
  • Hear This Idea
  • Invincible Wellbeing
  • Joe Carlsmith Audio
  • Morality Is Hard
  • Open Model Project
  • Press the Button
  • Rationally Speaking Podcast
  • The End of the World
  • The Filan Cabinet
  • The Foresight Institute Podcast
  • The Giving What We Can Podcast
  • The Good Pod
  • The Inside View
  • The Life You Can Save
  • The Rhys Show
  • The Turing Test
  • Two Persons Three Reasons
  • Un equilibrio inadecuado
  • Utilitarian Podcast
  • Wildness

VIDEOS

  • 80,000 Hours: Cambridge
  • A Happier World
  • AI In Context
  • AI Safety Talks
  • Altruismo Eficaz
  • Altruísmo Eficaz
  • Altruísmo Eficaz Brasil
  • Brian Tomasik
  • Carrick Flynn for Oregon
  • Center for AI Safety
  • Center for Security and Emerging Technology
  • Centre for Effective Altruism
  • Center on Long-Term Risk
  • Chana Messinger
  • Deliberation Under Ideal Conditions
  • EA for Christians
  • Effective Altruism at UT Austin
  • Effective Altruism Global
  • Future of Humanity Institute
  • Future of Life Institute
  • Giving What We Can
  • Global Priorities Institute
  • Harvard Effective Altruism
  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.
  • My Antichrist Lecture
    Forecasting transformative AI using the Book of Revelation
    Astral Codex Ten | 46 minutes ago
  • Stratified Utopia
    Summary: "Stratified utopia" is an outcome where mundane values get proximal resources (near Earth in space and time) and exotic values get distal resources (distant galaxies and far futures). I discuss whether this outcome is likely or desirable.
    LessWrong | 4 hours ago
  • Remarks on Bayesian studies from 1963
    In 1963, Mosteller and Wallace published Inference in an Authorship Problem, which used Bayesian statistics to try to infer who wrote some of the disputed Federalist Papers. (Answer: Madison) Anyway, at the end they have a list of "Remarks on Bayesian studies" which is astonishing to read 62 years later: Study of variation of results with different priors is recommended.
    LessWrong | 5 hours ago
  • Meghan Markle, Steve Bannon and Pope’s AI advisor call for superintelligence ban
    30% of the American public think superhuman AI should never be developed, too
    Transformer | 6 hours ago
  • ⿻ Symbiogenesis vs. Convergent Consequentialism
    (Cross-posted from SayIt archive and EA Forum.). (Background for conversation: After an exchange in the comments of Audrey's LW post where plex suggested various readings and got a sense that there were some differences in models worth exploring, plex suggested a call.
    LessWrong | 7 hours ago
  • EU explained in 10 minutes
    If you want to understand a country, you should pick a similar country that you are already familiar with, research the differences between the two and there you go, you are now an expert. But this approach doesn’t quite work for the European Union. You might start, for instance, by comparing it to the United States, assuming that EU member countries are roughly equivalent to U.S. states.
    LessWrong | 7 hours ago
  • Beyond the Call
    Dr. Andrew Browning (at left, green hat) stands with fistula patients and care staff at Galo Lutheran Mission Hospital in the Central African Republic. Our Fistula Foundation Partners are true heroes in every sense of the word. Across conflict zones and makeshift operating rooms, our partners prove that compassion can persevere even in the most … Continued.
    Fistula Foundation | 11 hours ago
  • Talking about longtermism isn't very important
    Epistemic status: quickly written, rehashing (reheating?) old, old takes. Also, written in a grumpier voice than I’d endorse (I got rained on this morning). Some essays on longtermism came out recently! Perhaps you noticed. I overall think these essays were just fine, and that we should all talk less about longtermism. In which I talk about longtermism.
    Effective Altruism Forum | 14 hours ago
  • Fruit-picking as an existential risk
    Acknowledgements. Thanks to Peter Hozák, Jonathan Ng, Abelard Podgorski and Siao Si Looi for discussions and input on the structure and substance of the essay - and in the last case for the beautiful illustration as well. Any remaining catastrophes are my own. TLDR: Longtermists usually focus on lowering existential risk, and in practice typically prioritise extinction risk. I argue that.
    Effective Altruism Forum | 14 hours ago
  • Proven strategies to build new habits with ease
    Key Takeaways Habits shape your life outcomes. From health to happiness to career success, much of what determines your future stems from your daily routines. Simple habits can snowball into major life improvements when maintained consistently. Building habits isn’t about hitting a magic number of repetitions. While automaticity helps, habits rarely become effortless.
    Clearer Thinking | 15 hours ago
  • How we got cash to Texas flood survivors in a week
    On July 4, floodwaters tore through homes across central Texas. In counties like Kerr, Williamson, and Travis, people lost housing, belongings, and access to essentials overnight. By July 11 –– just one week later –– GiveDirectly was sending $2,400 payments to help low-income families begin their recovery. In total, GiveDirectly sent $1.4M from over 1,300 donors […]...
    GiveDirectly | 15 hours ago
  • Retrospective on the first-ever EAGxNigeria, 2025
    From July 11–13, 2025, 290 attendees from across Africa and beyond gathered in Abuja, Nigeria, for EAGxNigeria 2025, the region’s largest EA conference to date. Designed to support community growth, deepen cause engagement, and forge collaborations, this convening represented a key milestone in building an EA community that is locally grounded, globally connected, and strategically ambitious.
    Effective Altruism Forum | 17 hours ago
  • How to Implement an Operation Warp Speed for Rare Earths
    A coordinated, whole-of-government effort to secure America’s rare earth supply chain
    Institute for Progress | 17 hours ago
  • How an AI company CEO could quietly take over the world
    If the future is to hinge on AI, it stands to reason that AI company CEOs are in a good position to usurp power. This didn’t quite happen in our AI 2027 scenarios.
    AI Futures Project | 18 hours ago
  • How Diseases Impact Wild Animal Welfare, And Why It Matters
    Measuring how diseases impact wild animals highlights how and why we should center their welfare in management and conservation. The post How Diseases Impact Wild Animal Welfare, And Why It Matters appeared first on Faunalytics.
    Faunalytics | 19 hours ago
  • Discussions of Longtermism should focus on the problem of Unawareness
    Abstract. Objections to longtermism often focus on issues like fanaticism, discounting, or classic reasons to doubt the tractability of positively influencing the far future. I argue that another challenge has been underdiscussed relative to those—namely, that posed by unawareness: many of the long-term possibilities most relevant to our actions are unknown to us.
    Effective Altruism Forum | 20 hours ago
  • Introducing: the Global Volcano Risk Alliance charity & Linkpost: 'When sleeping volcanoes wake' (AEON)
    We want to highlight two things in this post: Mike has published an essay in Aeon about the threat from hidden volcanoes, among other aspects, it's got some volcano science, history, climate, pandemics and storytelling in it, so something for everyone I hope! It highlights the global scale of this risk, its underappreciation in terms of governance, monitoring and funding.
    Effective Altruism Forum | 23 hours ago
  • “Every Life I Save Feels Like a Mission Fulfilled” — Muzamiru, Community Health Extension Worker, Uganda
    The post “Every Life I Save Feels Like a Mission Fulfilled” — Muzamiru, Community Health Extension Worker, Uganda appeared first on Living Goods.
    Living Goods | 1 day ago
  • World Food Prize winner Lawrence Haddad: 'You can't see climate and health separate'
    Published Mon, 10/20/2025 via CHANGE INC: https://www.change.inc/transities/voedsel-transitie/world-food-prize-winnaar-la…
    Global Alliance for Improved Nutrition | 1 day ago
  • Alignment as an Engineering Problem | PODCAST EXCERPT
    The full episode: https://youtu.be/WexyMWLVvX0
    The Flares | 1 day ago
  • Rule High Stakes In, Not Out
    Asymmetries in Significance given Model Uncertainty
    Good Thoughts | 1 day ago
  • Can you find the steganographically hidden message?
    tl;dr: I share a curated set of examples of models successfully executing message passing steganography from our recent paper. I then give a few thoughts on how I think about risks from this kind of steganography. Background. I recently was a co-first author on a paper (LW link) where we evaluated the steganographic capabilities of frontier models.
    LessWrong | 1 day ago
  • Considerations around career costs of political donations
    I’m close to a single-issue voter/donor. I tend to like politicians who show strong support for AI safety, because I think it’s an incredibly important and neglected problem. So when I make political donations, it’s not as salient to me which party the candidate is part of, if they've gone out of their way to support AI safety and have some integrity.
    LessWrong | 1 day ago
  • How Stuart Buck funded the replication crisis
    Stuart Buck has perhaps the largest Shapley value of any one individual in uncovering the replication crisis, first in psychology and then in many fields. This is his personal account of how and why he made the choices he did.
    LessWrong | 1 day ago
  • Tech PACs Are Closing In On The Almonds
    Astral Codex Ten | 1 day ago
  • AWS global outage ⛔, inside OpenAI deals 🤝, frontend maximalism 👨‍💻
    TLDR AI | 1 day ago
  • Applying to EA orgs? My thoughts as a new recruiter with AIM
    In the spirit of Draft Amnesty Week, and in light of Ambitious Impact currently hiring for staff roles - one recruitment manager (or director) and two researchers - I thought I’d share some of my recent reflections on finding impactful careers as a new recruiter in an EA org... . “Why is it so hard to get hired to do good?”.
    Effective Altruism Forum | 1 day ago
  • Introducing Senterra Funders: the new name for Farmed Animal Funders
    Senterra Funders is a donor community of 49 individual and institutional funders, each giving $250,000+ annually to end factory farming and build sustainable food systems. We offer expert philanthropic advising, collaborative giving opportunities, and a vibrant community, while also working to increase the pool of funding for the movement to end factory farming. Why a new name?.
    Animal Advocacy Forum | 1 day ago
  • Considerations around career costs of political donations
    Crossposted from LW. I didn't write this, but I know the author and think they're reasonable; I'm sharing it because it might be helpful. I think the direct effects of large donations strongly outweigh the personal side effects for almost everyone, but (if you don't already donate to Democrats) you should think before donating to Democrats if you really might work in a Republican...
    Effective Altruism Forum | 2 days ago
  • 🟩 CIA authorized to operate in Venezuela, Trump walks back China tariff threat, Budapest to host US-Russia meeting on Ukraine || Global Risks Weekly Roundup #42/2025.
    90% (86% to 92%) chance that former US National Security Advisor John Bolton will be convicted.
    Sentinel | 2 days ago
  • Watch “Henrietta Finds a Nest”: The Animated Short Everyone’s Talking About
    Mercy For Animals’ award-winning animated short film, Henrietta Finds a Nest, just made its global debut on October 20. Brought to life by the Emmy-winning animation team at Mighty Oak Studios, produced by Mercy For Animals, and executive produced by Daniella Monet, the film blends artistry and heart to share the remarkable true story of Henrietta’s […].
    Mercy for Animals | 2 days ago
  • Silence is Violence
    Failing to say things is often blameworthy
    Bentham's Newsletter | 2 days ago
  • Unexpected costs of monogamy
    Sometimes I get into arguments with people who think it’s morally wrong for me to be polyamorous.
    Thing of Things | 2 days ago
  • AI cyberrisk might be a bit overhyped — for now at least
    Experts say key factors currently limit the risk of catastrophic harm from AI-enabled cyberattacks — as far as we know...
    Transformer | 2 days ago
  • Activation Plateaus: Where and How They Emerge
    By design, LLMs perform nonlinear mappings from their inputs (text sequences) to their outputs (next-token generations). Some of these nonlinearities are built-in to the model architecture, but others are learned by the model, and may be important parts of how the model represents and transforms information.
    LessWrong | 2 days ago
  • Don’t Focus On Animal Welfare Programs — Upskill Farmers Instead
    This study suggests that policies to advance animal welfare should focus on farmers’ intrinsic motivations to continuously improve husbandry rather than participation in welfare programs. The post Don’t Focus On Animal Welfare Programs — Upskill Farmers Instead appeared first on Faunalytics.
    Faunalytics | 2 days ago
  • Daniel Kokotajlo on what a hyperspeed robot economy might look like
    The post Daniel Kokotajlo on what a hyperspeed robot economy might look like appeared first on 80,000 Hours.
    80,000 Hours | 2 days ago
  • The EU AI Act Newsletter #88: Resources to Support Implementation
    To help implement the AI Act, the European Commission has launched two key resources: the AI Act Service Desk and the Single Information Platform.
    The EU AI Act Newsletter | 2 days ago
  • The Real Moral Courage Is Screwing Over Your Friends
    I need a hero?
    Atoms vs Bits | 2 days ago
  • Scenes, cliques and teams - a high level ontology of groups
    Ontological status: Yes, this is ontology. Groups of people are one of the most important things. If I were to list all the things and rank them by importance, groups of people would be near the top. Love, truth and freedom and other such things might score higher from some angles, but these things are usually found in groups of humans anyway. And groups are complex.
    LessWrong | 2 days ago
  • Import AI 432: AI malware; frankencomputing; and Poolside's big cluster
    The revolution might be synthetic
    Import AI | 2 days ago
  • If We Can’t End Factory Farming, Can We Really Shape the Far Future?
    Veganism as the Alignment Test for Longtermism. Longtermism asks us to imagine the vastness of the future—trillions of lives, billions of years—and to act today as though those lives matter. It is a stirring vision, but it rests on a fragile assumption: that humanity is capable of aligning on a mission, coordinating across cultures and centuries, and acting with compassion at scale. Before...
    Effective Altruism Forum | 2 days ago
  • ChinAI #332: AI PhD Grads are "Unsellable"
    Greetings from a world where…...
    ChinAI Newsletter | 2 days ago
  • What are the most effective global health charities if you're pronatalist or "pro-life"?
    This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines: This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome. Summary:
    Effective Altruism Forum | 2 days ago
  • Navigating donation dilemmas: customizable Moral Parliament tools for better decision-making
    Executive Summary. Donors often navigate difficult choices among projects that have different outcomes and promote different values. Standard cost-effectiveness analyses allow us to see how changing assumptions (e.g., regarding moral weights or probabilities of success) can change our evaluations of individual interventions.
    Effective Altruism Forum | 2 days ago
  • Open Thread 404
    Astral Codex Ten | 2 days ago
  • Consider donating to Alex Bores, author of the RAISE Act
    Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments. Over the last several years, I’ve written a bunch of posts about politics and political donations.
    Unexpected Values | 2 days ago
  • When staying quiet backfired: how we rebuilt trust in Mozambique
    In the midst of post-election protests in November 2024, our cash program in Mogovolas, Mozambique was hit by a wave of false accusations. A social media post claimed that GiveDirectly was placing recipients under house arrest, acting as a political agent of the ruling party, and even recruiting for armed groups. The lies spread quickly, […]...
    GiveDirectly | 2 days ago
  • How unprecedented is power demand growth in the United States?
    Electricity generation is expected to rise. But will it grow much faster than it did in the past?
    Sustainability by Numbers | 2 days ago
  • Give Me Your Data: The Rationalist Mind Meld
    I don’t want your rationality. I can supply my own, thank you very much. I want your data. If you spot a logical error in my thinking, then please point it out. But short of that, among mostly-rational people, I think most disagreements come down to a difference of intuitions, which are rooted in a difference in the data people have been exposed to, and instead of presenting a logical...
    LessWrong | 2 days ago
  • Frontier LLM Race/Sex Exchange Rates
    This is a cross-post (with permission) of Arctotherium's post from yesterday: "LLM Exchange Rates, Updated.". It uses a similar methodology to the CAIS "Utility Engineering" paper, which showed e.g. "that GPT-4o values the lives of Nigerians at roughly 20x the lives of Americans, with the rank order being Nigerians > Pakistanis > Indians > Brazilians > Chinese > Japanese > Italians > French >...
    LessWrong | 2 days ago
  • Humanity Learned Almost Nothing From COVID-19
    Summary: Looking back at humanity's response to the COVID-19 pandemic, almost six years later, reveals that we have failed to follow through on our intent to prepare for the next one. I rant. Content warning: a single carefully placed slur. If we want to create a world free of pandemics and other biological catastrophes, the time to act is now. —US White House, “ FACT SHEET: The Biden...
    LessWrong | 2 days ago
  • Fish Welfare Initiative is hiring: Exploratory Programs Lead
    Are you someone who loves taking ideas from 0 → 1 — building programs that create real-world change for animals? Have you led small, high-agency teams that thrive in fast-moving, messy contexts? If that sounds like you, this might be one of the most exciting roles we’ve ever opened. As Exploratory Programs Lead, you’ll build and lead Fish Welfare Initiative’s department that takes new ideas...
    Animal Advocacy Forum | 2 days ago
  • Resource use matters, but material footprints are a poor way to measure it
    Adding up the weight of very different materials doesn’t tell us about their scarcity, environmental, or socioeconomic impacts.
    Our World in Data | 2 days ago
  • The China Tech Canon
    How does the paideía of the Chinese tech elite differ from their counterparts in Silicon Valley?.
    Asterisk | 2 days ago
  • OpenAI vs Hollywood 🎬, Hyperliquid's ascent 📈, sequencing your DNA 🧬
    TLDR AI | 2 days ago
  • Dividing responsibilities at home
    Borrow from project managers, and explicitly assign areas of responsibility. The post Dividing responsibilities at home appeared first on Otherwise.
    Otherwise | 2 days ago
  • In Defense of Stakes-Sensitivity
    Do more good, all else equal
    Good Thoughts | 3 days ago
  • “My EA Senescence” by Michael_PJ
    I have some claim to be an “old hand” EA: I was in the room when the creation of Giving What We Can was announced (although I vacillated about joining for quite a while). I first went to EA Global in 2015. I worked on a not-very-successful EA project for a while. But I have not really been much involved in the community since about 2020.
    Effective Altruism Forum Podcast | 3 days ago
  • The U.S. Public Wants Regulation (or Prohibition) of Expert‑Level and Superhuman AI
    Three‑quarters of U.S. adults want strong regulations on AI development, preferring oversight akin to pharmaceuticals rather than industry "self‑regulation."
    Future of Life Institute | 3 days ago
  • Are longtermist ideas getting harder to find?
    Imagine you are a junior advisor to the boss of some major EA longtermism-sympathetic org (OpenPhil, CEA, 80K, etc). You are tasked with reading the Essays on Longtermism compilation, and collating any novel insights that could significantly change what we should be doing. That is, we want essays that make ‘big, if true’ claims, and present interesting arguments for them.
    Effective Altruism Forum | 3 days ago
  • A GKR Tutorial
    Vitalik Buterin | 3 days ago
  • Nonhuman Rights Project: Position Openings
    Sharing information about two open full-time positions and a law student summer clerkship program at the Nonhuman Rights Project. Happy to answer questions about any of these!. Managing Director of Programs Reports to: Executive Director Location: Remote/US Compensation: $140,000 - $150,000 with competitive benefits package.
    Animal Advocacy Forum | 4 days ago
  • How I expect TAI to impact developing countries
    This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines: I'm posting this to get it out there and hopefully elicit discussion but am pretty uncertain about this post as I am not that familiar with global development and am also relatively new to AI safety.
    Effective Altruism Forum | 4 days ago
  • Emerging evidence on treating cluster headaches with DMT
    DMT, the smallest microdose of maybe 5mg with a vape pen stops the worst pain known, in literally 10–20 seconds. Acid and mushrooms as well, but they take time to come up. Even the smallest sub-perceptual dose of DMT stops the pain. – Yiftach Yerushalmy, cluster headache patient. One inhalation [of DMT] will end the attack for most people. Everybody is reporting the exact same thing.
    Effective Altruism Forum | 4 days ago
  • Evidence Is Seemings
    Here, I explain for the umpteenth time why I think justification comes from seemings. I was invited to contribute to a debate on whether “evidence is seemings”, defending the affirmative.
    Fake Nous | 4 days ago
  • The Ideological Turing Test
    Do you truly understand those you disagree with?
    Reasonable People | 4 days ago
  • Can LLMs Coordinate? A Simple Schelling Point Experiment
    TL;DR: I tested whether 5 reasoning models (GPT-5, Claude-4.5 Sonnet, Grok-4, Gemini-2.5-Pro, DeepSeek-R1) could coordinate on 75 short prompts when explicitly told to match each other's responses. Models did well on concrete prompts like "A capital in Europe" → "Paris", but did worse than I expected on more open ended prompts. Several of the responses made me laugh out loud.
    LessWrong | 4 days ago
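    A coordination experiment like the one above can be scored with a simple agreement metric: how many models landed on the modal answer. A minimal sketch — the `coordination_score` helper and the sample responses are illustrative, not the post's actual code:

    ```python
    from collections import Counter

    def coordination_score(responses: list[str]) -> float:
        """Fraction of respondents whose (normalized) answer matches the
        most common answer. 1.0 means everyone coordinated on one response."""
        counts = Counter(r.strip().lower() for r in responses)
        modal_count = counts.most_common(1)[0][1]
        return modal_count / len(responses)

    # Hypothetical responses to the prompt "A capital in Europe"
    print(coordination_score(["Paris", "Paris", "paris", "London", "Paris"]))  # 0.8
    ```

    Averaging this score over all 75 prompts would give a single headline coordination number per prompt category.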
  • The traffic revolution that’s making cities cleaner — and happier
    While walking my son to school a couple of weeks ago, I noticed something odd happening on Court Street, a major thoroughfare that runs through our part of Brooklyn: A lane of the street was being removed, to make room for a protected two-way bike lane. As a father who would like to see his […]...
    Future Perfect | 4 days ago
  • Meditation is dangerous
    Here's a story I've heard a couple of times. A young(ish) person is looking for some solutions to their depression, chronic pain, ennui or some other cognitive flaw. They're open to new experiences and see a meditator gushing about how amazing meditation is for joy, removing suffering, clearing one's mind, improving focus etc. They invite the young person to a meditation retreat.
    LessWrong | 4 days ago
  • Exploring the Potential of Kisan Call Centres for Farmer Surveys in India
    The post Exploring the Potential of Kisan Call Centres for Farmer Surveys in India appeared first on Precision Development (PxD).
    Precision Development | 5 days ago
  • Letter from the CEO/Executive Director – Fall 2025
    A $1 trillion problem. A $50 solution. Each year, vision loss robs low- and middle-income countries of an estimated $1 trillion in productivity. Yet we know that 90% of vision loss is preventable, often with solutions as simple as a pair of eyeglasses or a $50 cataract surgery. The evidence is unequivocal: restoring sight is ….
    Seva Foundation | 5 days ago
  • “I can see everything”:  A Mother’s Joy, a Daughter’s Hope
    “The cows, the house, my children—I can see everything. I am so happy.” Lemda’s smile says it all. At 52, the Tanzanian mother and cattle keeper is seeing the world again. Not that long ago, she was completely blind. Her vision had slipped away over the course of a year, until daily life became impossible. ….
    Seva Foundation | 5 days ago
  • Million Lives Collective
    Seva Foundation is pleased to announce its inclusion in the Million Lives Collective’s Vanguard cohort! This group celebrates innovators making a real difference for one million or more people living on less than $5.50 a day. Together with you, our donors and partners, we’re proving that clear vision is not only possible, but transformative.
    Seva Foundation | 5 days ago
  • A Trillion Dollar Wake-Up Call
    There is a quiet crisis affecting more than a billion people worldwide—poor vision. Too often dismissed as a minor health issue, vision loss actually holds back economies, weakens education, and drains productivity on a massive scale. A Seva-sponsored study reveals the true cost: over $1 trillion in productivity lost every year in low- and middle-income ….
    Seva Foundation | 5 days ago
  • Expanding Eye Care Through Innovation: The Call for Ideas
    Seva’s Call for Ideas (CFI) is a powerful tool for uncovering locally driven, innovative solutions that expand access to eye care. Since its launch in 2019 with Native Nations, CFI has grown to include global technology rounds and regional initiatives. Each CFI invites organizations to propose projects that improve access, build local capacity, and test ….
    Seva Foundation | 5 days ago
  • A Visionary Impact: Empowering Communities Through Eye Care in India
    India’s Envision Project, supported by Seva and Standard Chartered Bank, achieved its ambitious 3-year goal of creating access to eye care for 4 million people by establishing 65 Vision Centers across the country. Here’s a glimpse of the impact made so far: In addition to the medical impact, the project has strengthened local economies by ….
    Seva Foundation | 5 days ago
  • How Are Vision Centers Doing? We Asked.
    Seva’s 2024 Global Vision Center Survey gathered insights from 494 staff across five regions to understand the impact of Vision Centers (VCs) on care quality, community reach, and staff satisfaction. The results were overwhelmingly positive, especially where technology use was high. Staff noted challenges like patient education and transport, with calls for more training and ….
    Seva Foundation | 5 days ago
  • Quiché Update
    By the time you read this, a new eye hospital will have opened in Quiché, Guatemala. It’s part of Guatemala Brillando—a bold 10-year, $55 million initiative by Seva Foundation and partner-on-the-ground Visualiza to create the first self-sustaining national eye care network in Central America. With four hospitals, 15 vision centers, and over 200 Guatemalan staff—including ….
    Seva Foundation | 5 days ago
  • Soor Aur Saptak (SAS) completes 14 years with a Bollywood Musical Blast
    More than 450 enthusiastic guests filled Portland’s Patricia Reser Center for the Arts to attend Soor Aur Saptak (SAS), a benefit for Seva Foundation. This year SAS completed its 14th consecutive production, which brought color, energy, and joy to the evening. “Volunteering with Seva is my way of giving voice to vision—where melody meets compassion, ….
    Seva Foundation | 5 days ago
  • Seeing the Future: Screening Little Ones with SPOT
    Seva’s research shows that a child receiving glasses at age five can earn 78% more during their lifetime—just one of the many reasons early eye care matters. At Bharatpur Eye Hospital in Nepal, a new pilot program is making sure even the youngest children get the care they need to see clearly from the start. ….
    Seva Foundation | 5 days ago
  • Finding Features in Neural Networks with the Empirical NTK
    Summary: Kernel regression with the empirical neural tangent kernel (eNTK) gives a closed-form approximation to the function learned by a neural network in parts of the model space. We provide evidence that the eNTK can be used to find features in toy models for interpretability.
    LessWrong | 5 days ago
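    The closed-form approximation the summary refers to is standard kernel (ridge) regression, with the empirical NTK supplying the kernel matrix. A hedged sketch with a toy linear kernel standing in for a real eNTK; the function name and data are illustrative only:

    ```python
    import numpy as np

    def kernel_ridge_predict(K_train, K_test_train, y_train, ridge=1e-3):
        """Closed-form kernel regression: f(x) = K(x, X) (K(X, X) + λI)^{-1} y.
        K_train: (n, n) kernel matrix on training points (e.g. an eNTK);
        K_test_train: (m, n) kernel between test and training points."""
        n = K_train.shape[0]
        alpha = np.linalg.solve(K_train + ridge * np.eye(n), y_train)
        return K_test_train @ alpha

    # Toy data where a linear kernel recovers the target almost exactly
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    K = X @ X.T                      # stand-in for an empirical NTK
    preds = kernel_ridge_predict(K, K, y)   # preds ≈ y for small ridge
    ```

    With the eNTK in place of `K`, the same formula approximates the function the trained network computes, which is what makes it usable for feature-finding.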
  • AGI by 2032 is extremely unlikely
    Note: This is a fairly rough post adapted from some comments I recently wrote; I worked hard enough on them that I figured I should probably make them into a post. So, although this post is technically not a draft, it isn't written the way I would normally write a post — it's less polished and more off the cuff. If you think I should remove the Draft Amnesty tag, please say so, and I will!
    Effective Altruism Forum | 5 days ago
  • Andrej Karpathy — AGI is still a decade away
    "The problems are tractable, but they're still difficult”...
    The Lunar Society | 5 days ago
  • Less than 70% of FrontierMath is within reach for today’s models
    57% of problems have been solved at least once
    Epoch Newsletter | 5 days ago
  • At 40, Farm Aid Is Still About Music. It’s Also a Movement.
    The post At 40, Farm Aid Is Still About Music. It’s Also a Movement. appeared first on Mercy For Animals.
    Mercy for Animals | 5 days ago
  • How To Vastly Increase Your Charitable Impact
    Invest to give
    Bentham's Newsletter | 5 days ago
  • The Birth and Burial of Evolutionary Science in Australia
    Activists, often of mostly European ancestry, have appropriated prehistoric cultures and are systematically destroying fossils vital to understanding the evolutionary heritage of all humankind. The post The Birth and Burial of Evolutionary Science in Australia appeared first on Palladium.
    Palladium Magazine Newsletter | 5 days ago
  • Human vs AI Forecasts
    Human vs AI Forecasts: What Leaders Need to Know In October 2025, our colleagues at the Forecasting Research Institute released new ForecastBench results comparing large language models (LLMs) and human forecasters on real-world questions. Superforecasters still lead with a difficulty-adjusted Brier score of 0.081, while the best LLM to date, GPT-4.5, scores 0.101. In other […].
    Good Judgment Inc | 5 days ago
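    For readers unfamiliar with the metric above: a plain (non-difficulty-adjusted) Brier score is just the mean squared error between probabilistic forecasts and binary outcomes, so lower is better and 0.25 is what constant 50% guessing earns. A minimal sketch; the `brier_score` helper and sample numbers are illustrative, not ForecastBench's code:

    ```python
    def brier_score(forecasts, outcomes):
        """Mean squared error between forecast probabilities and 0/1 outcomes.
        0.0 is perfect; always guessing 0.5 scores 0.25."""
        assert len(forecasts) == len(outcomes)
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Three hypothetical resolved questions
    print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # (0.01 + 0.04 + 0.09) / 3 ≈ 0.047
    ```

    On this scale the gap the post reports — 0.081 for superforecasters versus 0.101 for the best LLM — is small but meaningful, since both sit well below the 0.25 chance baseline.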
  • A deal not worth making
    Transformer Weekly: OpenAI subpoenas, Nvidia shenanigans, and a new AGI definition
    Transformer | 5 days ago
  • How Sydney’s Small Plant-Based Businesses Quietly Promote Veganism
    Small plant-based restaurants are the unsung heroes of Sydney’s vegan movement, providing oases of vegan ideals and sensory bliss. This study explores their strategies and challenges. The post How Sydney’s Small Plant-Based Businesses Quietly Promote Veganism appeared first on Faunalytics.
    Faunalytics | 5 days ago
  • Heavy Metals in Huel
    This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines: This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.
    Effective Altruism Forum | 5 days ago
  • Entertainment for EAs
    I’ve used the phrase “entertainment for EAs” a bunch to describe a failure mode that I’m trying to avoid with my career. Maybe it’d be useful for other people working in meta-EA, so I’m sharing it here as a quick draft amnesty post. There’s a motivational issue in meta-work where it’s easy to start treating the existing EA community as stakeholders.
    Effective Altruism Forum | 5 days ago
  • Reducing risk from scheming by studying trained-in scheming behavior
    In a previous post, I discussed mitigating risks from scheming by studying examples of actual scheming AIs. In this post, I'll discuss an alternative approach: directly training (or instructing) an AI to behave how we think a naturally scheming AI might behave (at least in some ways). Then, we can study the resulting models.
    LessWrong | 5 days ago
  • EA needs organic optics, not 'no' optics (Hollywood is coming for us)
    I know the movement may feel tempted to forgo optics, but ironically now seems like the better time to care about them. For example, Netflix's forthcoming "The Altruists" covers FTX, and Luca Guadagnino's "Artificial" — which focuses specifically on the 5-day board firing of Sam Altman and will obviously cover the EA/AI safety movements — just wrapped.
    Effective Altruism Forum | 5 days ago
  • Non-Book Review Contest 2025 Winners
    Astral Codex Ten | 5 days ago
  • Compassion in World Farming is Hiring: Fundraising Manager (Poland)
    Fundraising Manager – Poland. Learn more and apply: Fundraising Manager – Poland job – Poland, remote work – Compassion in World Farming. Location: remote (working from a location in Poland); all employees are required to attend team meetings and events held at our Warsaw office – likely once or twice...
    Animal Advocacy Forum | 5 days ago
  • Compassion in World Farming is Hiring: Fundraising Specialist (Poland)
    Fundraising Specialist. Learn more and apply: Fundraising Specialist job – Poland, remote work – Compassion in World Farming. Location: remote (working from a location in Poland); all employees are required to attend team meetings and events held at our Warsaw office – likely once or twice a month.
    Animal Advocacy Forum | 5 days ago
  • Why Many EAs May Have More Impact Outside of Nonprofits in Animal Welfare
    Many thanks to @Felix_Werdermann 🔸 @Engin Arıkan and @Ana Barreiro for your feedback and comments on this, and for the encouragement from many people to finally write this up into an EA forum post. For years, much of the career advice in the Effective Altruism community has implicitly (or explicitly) suggested that impact = working at an EA nonprofit.
    Effective Altruism Forum | 5 days ago
  • Introducing the Inaugural Health Promotion & Disease Prevention (HP&DP) Bulletin Uganda
    The post Introducing the Inaugural Health Promotion & Disease Prevention (HP&DP) Bulletin Uganda appeared first on Living Goods.
    Living Goods | 5 days ago

  • Leveller
  • Machine Intelligence Research Institute
  • Metaculus
  • Ozzie Gooen
  • Qualia Research Institute
  • Raising for Effective Giving
  • Rational Animations
  • Robert Miles
  • Rootclaim
  • School of Thinking
  • The Flares
  • EA Groups
  • EA Tweets
  • EA Videos
Inactive websites grayed out.
Effective Altruism News is a side project of Tlön.