roughly 13 hours ago

I’m more optimistic about the possibility of beneficial AGI in general than most folks, I think, but something that caught me in the article was the recourse to mammalian sociality to (effectively) advocate for compassion as an emergent quality of intelligence.

A known phenomenon among sociologists is that, while people may be compassionate, when you collect them into a superorganism like a corporation, army, or nation, they will by and large behave and make decisions according to the moral and ideological landscape that superorganism finds itself in. Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position. Nobody would throw someone out of their home or deny another person lifesaving medicine, but as a bank officer or an insurance agent, they make a living doing these things and sleep untroubled at night. A CEO will lay off 30,000 people - an entire small city cast off into an uncaring market - with all the introspection of a Mongol chieftain subjugating a city (and probably less emotion). Humans may be compassionate, but employees, soldiers, and politicians are not, even though at a glance they’re made of the same stuff.

That’s all to say that to just wave generally in the direction of mammalian compassion and say “of course a superintelligence will be compassionate” is to abdicate our responsibility for raising our cognitive children in an environment that rewards the morals we want them to have, which is emphatically not what we’re currently doing for the collective intelligences we’ve already created.

  • dataflow 12 hours ago

    > Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position.

    I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?

    > Nobody would throw someone out of their home or deny another person lifesaving medicine, but as a bank officer or an insurance agent, they make a living doing these things and sleep untroubled at night.

    Again, you're forgetting to control for other variables. What if you paid them equally to do the same things?

    • Eddy_Viscosity2 2 hours ago

      > I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?

      > What if you paid them equally to do the same things?

      I think the larger point is that rewarding bombing, or paying bank officers to evict people from their homes, is how the superorganism functions. Your counterexamples are like saying 'what if fire was cold instead of hot': well, then it wouldn't be fire anymore.

      • master_crab 43 minutes ago

        To use a quote from one of those large corporate leaders (Warren Buffett & Charlie Munger):

        "Show me the incentive and I will show you the outcome"

      • AndrewKemendo 44 minutes ago

        There is no human superorganism, and the reason we’re doomed as a temporary species is precisely that humans cannot act eusocially as a superorganism.

        By your definition the Moscow Metallica show, the Jan 6th riots, etc. were superorganisms, and that’s not even remotely applicable.

        Humans expressing group behaviors in trivial numbers for a trivial period (<1M people for <2 days is the largest sustained group activity I’m aware of) is the equivalent of a locust swarm, not even close to a superorganism.

    • achierius 11 hours ago

      Why should you "control" for these variables? AIs will effectively be punished for doing various inscrutable things by their own internal preferences.

  • adamisom 13 hours ago

    a CEO laying off 3% scales in absolute numbers as the company grows

    Should, therefore, large companies, even ones that succeed largely in a clean way by just being better at delivering what that business niche exists for, be made to never grow too big, in order to avoid impacting very many people? Keep in mind that people engage in voluntary business transactions because they want to be impacted (positively, though not every impact can be positive in any real world).

    What if its less efficient substitutes collectively lay off 4%, but the greater layoffs are hidden (simply because it's not a single employer doing it, which would be more obvious)?

    To an extent, a larger population inevitably means that larger absolute numbers of people will be affected by... anything.

    • schmidtleonard 13 hours ago

      > voluntary business transactions

      The evil parts are hidden in property rights, which are not voluntary.

      > made to never grow too big, in order to avoid impacting very many people

      Consolidated property rights have more power against their counterparties, that's why businesses love merging so much.

      Look at your tax return. Do you make more money from what you do or what you own? If you make money from what you do, you're a counterparty and you should probably want to tap the brakes on the party.

      • nradov 12 hours ago

        What are the evil parts, exactly? When property can't be privately owned with strong rights, then effectively the government owns everything. That inevitably leads to poverty, often followed by famine and/or genocide.

        • Retric 12 hours ago

          Plenty of examples on both sides of that. Even in the US there are vast swaths of land that can’t be privately owned: try to buy a navigable river, or land below the ordinary high water mark. https://en.wikipedia.org/wiki/Navigable_servitude Similarly, eminent domain severely limits the meaning of private land ownership in the US.

          The most extreme capitalist societies free from government control of resources like say Kowloon Walled City are generally horrible places to live.

        • Eddy_Viscosity2 2 hours ago

          Why is it that property is taxed less than productive work? Someone sitting on their ass doing nothing but sucking resources through dividend payments has that income taxed less than the income of the workers who did the work that generated those dividends. Why isn't the reverse the case? Heavily tax passive income and lightly tax active income; incentivize productive activity and penalize rent-seeking parasites.

        • jonah 12 hours ago

          Places with predominantly private ownership can be and are prone to famine, and/or genocide, etc. as well.

          • hunterpayne 11 hours ago

            Sure, but those are the exceptions that prove the rule. Centralized (Marxism and its descendants) societies tend to have those things happen the majority of the time. In decentralized capitalist societies, they happened once, a long time ago, and we took steps to keep them from happening again. A flaw in those societies seems to be that these problems happen so infrequently that people forget, and then you get takes like this.

            • piva00 4 hours ago

              Centralised planning is not what Marxism is about, though. Marxism is about class struggle and the abolition of a capital-owning class, distributing the fruits of labour to the labourers.

              By that definition it's even more decentralised than capitalism, which has inherent incentives for the accumulation of capital into monopolies, since those are the best profit-generating structures; only forces external to capitalism, like governments enforcing anti-trust/anti-competitive laws, can rein in that natural tendency toward monopolisation.

              If the means of production were owned by labourers (not through the central government) it could be possible to see much more decentralisation than the current trend from the past 40 years of corporate consolidation.

              The centralisation is already happening under capitalism.

              • AndrewKemendo 41 minutes ago

                Unfortunately you’re talking to the void here.

                People can’t differentiate between what Marx wrote and what classic dictators (Lenin, Stalin, Mao) did under some retcon “Marxist” banner

    • rafabulsing 13 hours ago

      I think it's reasonable that bigger companies are under more scrutiny and stricter constraints than smaller companies, yeah.

      It keeps actors with more potential for damaging society in check, while not placing a huge burden on small companies, which have fewer resources to spend away from their core business.

    • roughly 12 hours ago

      Indeed, by what moral justification does one slow the wheels of commerce, no matter how many people they run over?

  • parineum 13 hours ago

    > Nobody would throw someone out of their home or deny another person lifesaving medicine

    Individuals with rental properties and surgeons do this every day.

    • margalabargala 13 hours ago

      Quibble: surgeons are not the ones doing this. Surgeons' schedules are generally permanently full. They do not typically deny people lifesaving medicine; on the contrary, they spend all of their time providing lifesaving medicine.

      The administrators who create the schedule for the surgeons are the ones denying lifesaving care to people.

      • schmidtleonard 13 hours ago

        Triage, whether by overworked nurses or by auction or by private death panel or by public death panel, is not necessarily a problem created by administrators. It can be created by having too few surgeons, in which case whatever caused that (in a time of peace, no less) is at fault. Last I heard it was the doctor's guild lobbying for a severe crimp on their training pipeline, in which case blame flows back to some combination of doctors and legislators.

        • nradov 12 hours ago

          You heard wrong. While at one point the AMA lobbied Congress to restrict residency slots, they reversed position some years back. However Congress has still refused to increase Medicare funding for residency programs. This is essentially a form of care rationing imposed through supply shortages.

          https://savegme.org/

          There is no "doctor's guild". No one is required to join the AMA to practice medicine, nor are they involved in medical school accreditation.

        • margalabargala 13 hours ago

          I'm not even talking about triage. It's not a matter of who has the worst problem, it's about which patient the nurses deliver to the surgeon and anesthesiologist. Literally just who gets scheduled and when.

      • xboxnolifes 7 hours ago

        If all of the surgeons' schedules are full, the administrators are as innocent as the surgeons.

        • margalabargala 18 minutes ago

          If the surgeons are busy each day, that removes all responsibility for who gets added to their schedule 3 months in advance? Please elaborate.

      • parineum 13 hours ago

        Surely they could volunteer to do some charity surgery in their own time. They aren't slaves.

        • icehawk 13 hours ago

          Sure! They can volunteer:

          - Their skills.

          - Their time.

          - The required materials to properly perform the surgery.

          They can't volunteer:

          - The support staff around them required to do surgery.

          - The space to do the surgery.

          Surgery isn't a one-man show.

          What did you mean by "Surely they could volunteer to do some charity surgery in their own time. They aren't slaves?"

          • parineum 12 hours ago

            There are a lot of individuals who have the ability to provide those resources.

            Even if that's a bad example, there are innumerable examples where individuals do choose not to help others in the same way that corporations don't.

            Frankly, nearly every individual is doing that by not volunteering every single extra dollar and minute they don't need to survive.

        • margalabargala 13 hours ago

          Not really, because surgeons require operating rooms and support staff and equipment to do what they do, all of which are controlled by the aforementioned hospital administrators.

    • card_zero 13 hours ago

      Yeah, it's the natural empathy myth. Somebody totally would kill somebody else for some reason. It's not inherent to being human that you're unable to be steely-hearted and carry out a range of actions we might classify as "mean" - and those mean actions can have reasons behind them.

      So, OK, abdication of responsibility to a collective is a thing. Just following orders. So what? Not relevant to AGI.

      Oh wait, this is about "superintelligence", whatever that is. All bets are off, then.

      • NoMoreNicksLeft 11 hours ago

        The superintelligence might decide based on things only it can understand that the existence of humans prevents some far future circumstance where even more "good" exists in the universe. When it orders you to toss the babies into the baby-stomping machine, perhaps you should consider doing so based on the faith in its superintelligence that we're supposed to have.

        Human beings aren't even an intelligent species, not at the individual level. When you have a tribe of human beings numbering in the low hundreds, practically none of them need to be intelligent at all. They need to be social. Only one or two need to be intelligent. That one can invent microwave ovens and The Clapper™, and the rest though completely mentally retarded can still use those things. Intelligence is metabolically expensive, after all. And if you think I'm wrong, you're just not one of the 1-in-200 that are the intelligent individuals.

        I've yet to read the writings of anyone who can actually speculate intelligently on artificial intelligence, let alone meet such a person. The only thing we have going for us as a species is that, to a large degree, none of you are intelligent enough to ever deduce the principles of intelligence. And god help us if the few exceptional people out there get a wild bug up their ass to do so. There will just be some morning where none of us wake up, and the few people in the time zone where they're already awake will experience several minutes of absolute confusion and terror.

  • inkyoto 13 hours ago

    I would argue that corporate actors (a state, an army or a corporation) are not true superorganisms but are semi-autonomous, field-embedded systems that can exhibit super-organism properties, with their autonomy being conditional, relational and bounded by the institutional logics and resource structures of their respective organisational fields. As the history of humanity has shown multiple times, such semi-autonomous systems with super-organism properties have a finite lifespan and are incapable of evolving their own – or on their own – qualitatively new or distinct form of intelligence.

    The principal deficiency in our discourse surrounding AGI lies in the profoundly myopic lens through which we insist upon defining it – that of human cognition. Such anthropocentric conceit renders our conceptual framework not only narrow but perilously misleading. We have, at best, a rudimentary grasp of non-human intelligences – biological or otherwise. The cognitive architectures of dolphins, cephalopods, corvids, and eusocial insects remain only partially deciphered, their faculties alien yet tantalisingly proximate. If we falter even in parsing the intelligences that share our biosphere, then our posturing over extra-terrestrial or synthetic cognition becomes little more than speculative hubris.

    Should we entertain the hypothesis that intelligence – in forms unshackled from terrestrial evolution – has emerged elsewhere in the cosmos, the most sober assertion we can offer is this: such intelligence would not be us. Any attempt to project shared moral axioms, epistemologies or even perceptual priors is little more than a comforting delusion. Indeed, hard core science fiction – that last refuge of disciplined imagination – has long explored the unnerving proposition of encountering a cognitive order so radically alien that mutual comprehension would be impossible, and moral compatibility laughable.

    One must then ponder – if the only mirror we possess is a cracked one, what image of intelligence do we truly see reflected in the machine? A familiar ghost, or merely our ignorance, automated?

    • Animats 12 hours ago

      > I would argue that corporate actors (a state, an army or a corporation) are not true superorganisms but are semi-autonomous, field-embedded systems that can exhibit super-organism properties, with their autonomy being conditional, relational and bounded by the institutional logics and resource structures of their respective organisational fields.

      Lotsa big words there.

      Really, though, we're probably going to have AI-like things that run substantial parts of for-profit corporations. As soon as AI-like things are better at this than humans, capitalism will force them to be in charge. Companies that don't do this lose.

      There's a school of thought, going back to Milton Friedman, that corporations have no responsibilities to society.[1] Their goal is to optimize for shareholder value. We can expect to see AI-like things which align with that value system.

      And that's how AI will take over. Shareholder value!

      [1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...

      • SoftTalker 11 hours ago

        That assumes that consumers will just accept it. I would not do business with an AI company, just as I don’t listen to AI music, view AI pictures or video, or read AI writings. At least not knowingly.

        • Jensson 9 hours ago

          People would absolutely buy AI farmed meat or vegetables if they were 10% cheaper. The number of people who pay a premium depending on production method is a small minority.

        • AndrewKemendo 40 minutes ago

          As long as you stay inside capitalism you unquestionably and unequivocally do business with an AI company.

      • halfcat 10 hours ago

        Costs will go down. But so will revenue, as fewer customers have an income because a different company also cut costs.

        Record profits. Right up until the train goes off a cliff.

  • goatlover 13 hours ago

    Also, sociopaths are more capable of doing those things while pretending they are empathetic and moral, in order to get positions of power or access to victims. We know a certain percentage of human mammals have sociopathic or narcissistic tendencies; it's not just misaligned groups of humans, which such people might take advantage of by becoming a cult leader or warlord or president.

  • watwut 7 hours ago

    > soldier will bomb a village for the sake of their nation’s geostrategic position.

    A soldier does that to please the captain, to look manly and tough to peers, to feel powerful. Or to fulfill a duty, a moral mandate in itself. Or out of hate, because soldiers are often made to hate the enemy.

    > Nobody would throw someone out of their home or deny another person lifesaving medicine

    They totally would. Trump would do it for the pleasure of it. The Project 2025 authors would do it happily and see the rest of us as wusses. If you listen to right-wing rhetoric and look at the voters, many people will happily do just that.

jandrewrogers 13 hours ago

I’ve known both Ben and Eliezer since the 1990s and enjoyed the arguments. Back then I was doing serious AI research along the same lines as Marcus Hutter and Shane Legg, which had a strong basis in algorithmic information theory.

While I have significant concerns about AGI, I largely reject both Eliezer’s and Ben’s models of where the risks are. It is important to avoid the one-dimensional “two faction” model that dominates the discourse because it really doesn’t apply to complex high-dimensionality domains like AGI risk.

IMO, the main argument against Eliezer’s perspective is that it relies pervasively on a “spherical cow on a frictionless plane” model of computational systems. It is fundamentally mathematical, it does not concern itself with the physical limitations of computational systems in our universe. If you apply a computational physics lens then many of the assumptions don’t hold up. There is a lot of “and then something impossible happens based on known physics” buried in the assumptions that have never been addressed.

That said, I think Eliezer’s notion that AGI fundamentally will be weakly wired to human moral norms is directionally correct.

Most of my criticism of Ben’s perspective is against the idea that some kind of emergent morality that we would recognize is a likely outcome based on biological experience. The patterns of all biology emerged in a single evolutionary context. There is no reason to expect those patterns to be hardwired into an AGI that developed along a completely independent path. AGI may be created by humans but their nature isn’t hardwired by human evolution.

My own hypothesis is that AGI, such as it is, will largely reflect the biases of the humans that built it but will not have the biological constraints on expression implied by such programming in humans. That is what the real arms race is about.

But that is just my opinion.

  • svsoc 12 hours ago

    Can you give concrete examples of "something impossible happens based on known physics"? I have followed the AI debate for a long time but I can't think of what those might be.

    • jandrewrogers 12 hours ago

      Optimal learning is an interesting problem in computer science because it is fundamentally bound by geometric space complexity rather than computational complexity. You can bend the curve but the approximations degrade rapidly and still have a prohibitively expensive exponential space complexity. We have literature for this; a lot of the algorithmic information theory work in AI was about characterizing these limits.

      The annoying property of prohibitively exponential (ignoring geometric) space complexity is that it places a severe bound on computational complexity per unit time. The exponentially increasing space implies an increase in latency for each sequentially dependent operation, bounded at the limit by the speed of light. Even if you can afford the insane space requirements, your computation can’t afford the aggregate latency for anything useful even for the most trivial problems. With highly parallel architectures this can be turned into a latency-hiding problem to some extent but this also has limits.

      This was thoroughly studied by the US defense community decades ago.

      The tl;dr is that efficient learning scales extremely poorly, more poorly than I think people intuit. All of the super-intelligence hard-takeoff scenarios? Not going to happen, you can’t make the physics work without positing magic that circumvents the reality of latencies when your state space is unfathomably large even with unimaginably efficient computers.

      I harbor a suspicion that the cost of this scaling problem, and the limitations of wetware, has bounded intelligence in biological systems. We can probably do better in silicon than wetware in some important ways but there is not enough intrinsic parallelism in the computation to adequately hide the latency.

      Personally, I find these “fundamental limits of computation” things to be extremely fascinating.
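
      As a rough back-of-envelope sketch of the latency point (a toy illustration; the bit density and state sizes below are assumptions picked for the example, not figures from the literature):

          # Speed-of-light latency floor for a memory large enough to hold an
          # exponentially growing state space. All constants are illustrative.
          import math

          C = 3.0e8            # speed of light, m/s
          BIT_DENSITY = 1e27   # assumed bits per cubic metre of storage (very generous)

          def round_trip_latency_s(state_bits: float) -> float:
              """Round-trip time to the edge of the smallest sphere that can hold the state."""
              volume_m3 = state_bits / BIT_DENSITY
              radius_m = (3.0 * volume_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)
              return 2.0 * radius_m / C

          # A state space that keeps doubling as the problem dimension grows.
          for n in (100, 150, 200, 250):
              print(f"2^{n} bits -> {round_trip_latency_s(2.0 ** n):.3e} s per dependent access")

      Even with an absurdly generous storage density, the time per sequentially dependent access goes from nanoseconds to minutes to years as the state keeps doubling, which is the gist of the argument as I read it.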

      • hunterpayne 10 hours ago

        So I studied Machine Learning too. One of the main things I learned is that for any problem there is an ideally sized model that when trained will produce the lowest error rate. Now, when you do multi-class learning (training a model for multiple problems), that ideally sized model is larger, but there is still an optimum-sized model. Seems to me that for AGI there will also be an ideally sized model. I wouldn't be surprised if the complexity of that model was very similar to the size of the human brain. If that is the case, then some sort of super-intelligence isn't possible in any meaningful way. This would seem to track with what we are seeing in today's LLMs. When they build bigger models, they often don't perform as well as the previous one, which perhaps was at some maximum/ideal complexity. I suspect we will continue to run into this barrier over and over again.

        • czl 6 hours ago

          > for any problem there is an ideally sized model that when trained will produce the lowest error rate.

          You studied ML before the discovery of "double descent"?

          https://youtu.be/z64a7USuGX0
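
          For a concrete toy illustration of double descent (my own sketch, not from the linked talk; the data, feature counts, and noise level are made-up assumptions), minimum-norm least squares on random ReLU features usually shows the characteristic spike in test error near the interpolation threshold, followed by a second descent:

              # Toy double-descent demo: min-norm least squares on random ReLU features.
              # Test error typically peaks near n_features == n_train, then falls again.
              import numpy as np

              rng = np.random.default_rng(0)
              n_train, n_test, d = 40, 500, 10
              w_true = rng.normal(size=d)

              def make_data(n):
                  X = rng.normal(size=(n, d))
                  y = X @ w_true + 0.5 * rng.normal(size=n)   # noisy linear target
                  return X, y

              X_train, y_train = make_data(n_train)
              X_test, y_test = make_data(n_test)

              for n_features in (5, 20, 40, 80, 400, 2000):
                  W = rng.normal(size=(d, n_features)) / np.sqrt(d)   # random projection
                  phi_train = np.maximum(X_train @ W, 0.0)            # ReLU random features
                  phi_test = np.maximum(X_test @ W, 0.0)
                  # lstsq returns the minimum-norm solution when the system is underdetermined
                  coef, *_ = np.linalg.lstsq(phi_train, y_train, rcond=None)
                  test_mse = np.mean((phi_test @ coef - y_test) ** 2)
                  print(f"{n_features:5d} features -> test MSE {test_mse:.3f}")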

      • davidivadavid 10 hours ago

        Any reference material (papers/textbooks) on that topic? It does sound fun.

    • adastra22 11 hours ago

      Not the person you are responding to, but much of the conclusions drawn by Bostrom (and most of EY’s ideas are credited to Bostrom) depend on infinities. The orthogonality thesis derives from AIXI, for example.

      EY’s assertions regarding a fast “FOOM” have been empirically discredited by the very fact that ChatGPT was created in 2022, it is now 2025, and we still exist. But the goalposts keep being moved. Even ignoring that error, the logic is based on, essentially, “AI is a magic box that can solve any problem by thought alone.” If you can define a problem, the AI can solve it. This is part of the analysis done by AI x-risk people of the MIRI tradition, which ignores entirely that there are very many problems (including AI recursive improvement itself) that are computationally infeasible to solve in this way, no matter how “smart” you are.

drivebyhooting 14 hours ago

Many of us on HN are beneficiaries of the standing world order and American hegemony.

I see the developments in LLMs not as getting us close to AGI, but more as destabilizing the status quo and potentially handing control of the future to a handful of companies rather than securing it in the hands of people. It is an acceleration of the already incipient decay.

  • pols45 13 hours ago

    It is not decay. People are just more conscious than previous generations ever were about how the world works. And that leads to confusion and misunderstandings if they are only exposed to herd think.

    The chicken doesn't understand it has to lay a certain number of eggs a day to be kept alive in the farm. It hits its metrics because it has been programmed to hit them.

    But once it gets access to chatgpt and develops consciousness of how the farm works, the questions it asks slowly evolve with time.

    Initially it's all fear driven - how do we get a say in how many eggs we need to lay to be kept alive? How do we keep the farm running without relying on the farmer? etc etc

    Once the farm animals begin to realize the absurdity of such questions, new questions emerge - how come the crow is not a farm animal? why is the shark not used as a circus animal? etc etc

    And through that process, whose steps cannot be skipped, the farm animal begins to realize certain things about itself which no one, especially the farmer, has any incentive to encourage.

    • binary132 4 hours ago

      truly, nobody ever asked such questions until they had access to the world’s most sycophantic dumb answer generating machine

    • hunterpayne 10 hours ago

      Ideology is a -10 modifier on Intelligence

      • dns_snek 7 hours ago

        Are you implying that there are people who don't have ideology or that they're somehow capable of reasoning and acting independently of their ideology?

  • steve_adams_86 14 hours ago

    I agree. You wouldn't see incredibly powerful and wealthy people frothing at the mouth to build this technology if that wasn't true, in my opinion.

    • goatlover 13 hours ago

      People who like Curtis Yarvin's ramblings.

      • jerf 12 hours ago

        No one needs Curtis Yarvin, or any other commentator of any political stripe, to tell them that they'd like more money and power, and that they'd like to get it before someone else locks it in.

        We should be so lucky as to only have to worry about one particular commentator's audience.

  • IAmGraydon 13 hours ago

    Are you seeing a moat develop around LLMs, indicating that only a small number of companies will control it? I'm not. It seems that there's nearly no moat at all.

    • bayarearefugee 12 hours ago

      I am also not seeing a moat on LLMs.

      It seems like the equilibrium point for them a few years out will be that most people will be able to run good-enough LLMs on local hardware, through a combination of the models not seeming to get much better (due to input data exhaustion) and various forms of optimization increasingly allowing them to run on lesser hardware.

      But I still have generalized lurking amorphous concerns about where this all ends up because a number of actors in the space are certainly spending as if they believe a moat will magically materialize or can be constructed.

    • drivebyhooting 12 hours ago

      The moat is around capital. For thousands of years most people were slaves or peasants whose cheap fungible labor was exploited.

      For a brief period intellectual and skilled work has (had?) been valued and compensated, giving rise to a somewhat wealthy and empowered middle class. I fear those days are numbered and we’re poised to return to feudalism.

      What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination? Or burgeoning of precariat gig workers barely hanging on? If we’re speaking of extremes, I find the latter far more likely.

      • stale2002 8 hours ago

        > The moat is around capital.

        Not really. I can run some pretty good models on my high end gaming PC. Sure, I can't train them. But I don't need to. All that has to happen is at least one group releases a frontier model open source and the world is good to go, no feudalism needed.

        > What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination

        I'd say what's more likely is that whatever we are seeing now continues. And the current situation is a massive startup boom run on open source models that are nearly as good as the private ones, while GPUs are being widely distributed.

    • CamperBob2 12 hours ago

      LLMs as we know them have no real moat, but few people genuinely believe that LLMs are sufficient as a platform for AGI. Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.

      • czl 6 hours ago

        > Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.

        Today yes but extrapolate GPU/NPU/CPU improvement by a decade.

  • voidfunc 13 hours ago

    I'm pretty skeptical "the people" are smart enough to control their own destiny anymore. We've deprioritized education so heavily in the US that it may be better to have a ruling class of corporations and elites. At least you know where things stand and how they'll operate.

    • roughly 13 hours ago

      > it may be better to have a ruling class of corporations and elites.

      Given that the outcome of that so far has been to deprioritize education so heavily in the US that one becomes skeptical that the people are smart enough to control their own destiny anymore while simultaneously shoving the planet towards environmental calamity, I’m not sure doubling down on the strategy is the best bet.

    • idle_zealot 13 hours ago

      Or we could, you know, prioritize education.

  • roenxi 13 hours ago

    The standing world order was already dead well before AI; it ended back in the 2010s, the last point at which the US had an opportunity to maybe resist the change, and we're just watching the inevitable consequences play out. They no longer have the economic weight to maintain control over Asia, even assuming China is overstating its income by 2x. The Ukraine war has been a bloodier path than we needed to travel to make the point, but if they can't coerce Russia there is an open question of whom they can, and Russia isn't a particularly impressive power.

    With that backdrop it is hard to see what impact AI is supposed to make to people who are reliant on US hegemony. They probably want to find something reliable to rely on already.

JSR_FDED 14 hours ago

> In theory, yes, you could pair an arbitrarily intelligent mind with an arbitrarily stupid value system. But in practice, certain kinds of minds naturally develop certain kinds of value systems.

If this is meant to counter the “AGI will kill us all” narrative, I am not at all reassured.

>There’s deep intertwining between intelligence and values—we even see it in LLMs already, to a limited extent. The fact that we can meaningfully influence their behavior through training hints that value learning is tractable, even for these fairly limited sub-AGI systems.

Again, not reassuring at all.

  • lll-o-lll 14 hours ago

    > There’s deep intertwining between intelligence and values—we even see it in LLMs already

    I’ve seen this repeated quite a bit, but it’s simply unsupported by evidence. It’s not as if this hasn’t been studied! There’s no correlation between intelligence and values, or empathy for that matter. Good people do good things; you aren’t intrinsically “better” because of your IQ.

    Standard nerd hubris.

    • JumpCrisscross 13 hours ago

      > There’s no correlation between intelligence and values

      Source? (Given values and intelligence are moving targets, it seems improbable one could measure one versus another without making the whole exercise subjective.)

      • mitthrowaway2 11 hours ago

        Assuming you take intelligence to mean something like "the ability to make accurate judgements on matters of fact, accurate predictions of the future, and select courses of action that achieve one's goals or maximize one's objective function", then this is essentially another form of the Is-Ought problem derived by Hume: https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem

    • lukeschlather 13 hours ago

      I think you're confusing "more intelligence means you have to have more values" with "more intelligence means you have to have morally superior values."

      The point is, you're unlikely to have a system that starts out with the goal of making paperclips and ends with the goal of killing all humans. You're going to have to deliberately program the AI with a variety of undesirable values in order for it to arrive in a state where it is suited for killing all humans. You're going to have to deliberately train it to lie, to be greedy, to hide things from us, to look for ways to amass power without attracting attention. These are all hard problems and they require not just intelligence but that the system has very strong values - values that most people would consider evil.

      If, on the other hand, you're training the AI to have empathy, to tell the truth, to try and help when possible, to avoid misleading you, it's going to be hard to accidentally train it to do the opposite.

      • rafabulsing 13 hours ago

        > You're going to have to deliberately train it to lie, to be greedy, to hide things from us, to look for ways to amass power without attracting attention.

        No, that's the problem. You don't have to deliberately train that in.

        For pretty much any goal that you train the AI to achieve, once it gets smart enough it will recognize that lying, hiding information, manipulating, and being deceptive are all very useful instruments for achieving that goal.

        So you don't need to tell it that: if it's intelligent, it's going to reach that conclusion by itself. No one tells children that they should lie either, and they all seem to discover that strategy sooner or later.

        So you are right that you have to deliberately train it away from using those strategies, by being truthful, empathetic, honest, etc. The issue is that those are ill-defined goals. Philosophers have been arguing about what's true and what's good since philosophy first was a thing. Since we can barely find those answers for ourselves, there's little chance we'll be able to perfectly impart them onto AIs. And when you have some supremely intelligent agent acting on the world, even a small misalignment may end up in catastrophe.

        • czl 5 hours ago

          > when you have some supremely intelligent agent acting on the world, even a small misalignment may end up in catastrophe

          Why not frame this as a challenge for AI? When the intelligence gap between a fully aligned system and a not-yet-aligned one becomes very large, control naturally becomes difficult.

          However, recursive improvement — where alignment mechanisms improve alongside intelligence itself — might prevent that gap from widening too much. In other words, perhaps the key is ensuring that alignment scales recursively with capability.

      • mitthrowaway2 12 hours ago

        Sorry, this is completely incorrect. All of those - lying, amassing power, hiding motives - are instrumental goals which arise in the process of pursuing any goal that has any possibility of resistance from humans.

        This is like arguing that a shepherd who wants to raise some sheep would also have to, independently of the desire to protect his herd, be born with an ingrained desire to build fences and kill wolves, otherwise he'd simply watch while they eat his flock.

        That's just not the case; "get rid of the wolves" is an instrumental sub-goal that the shepherd acquires in the process of attempting to succeed and shepherding. And quietly amassing power is something that an AI bent on paperclipping would do to succeed at paperclipping, especially once it noticed that humans don't all love paperclips as much as it does.

    • MangoToupe 13 hours ago

      Sure, but this might just imply a stupid reproduction of existing values. Meaning that we're building something incapable of doing good things because it wants the market to grow.

  • smallmancontrov 14 hours ago

    > When automation eliminates jobs faster than new opportunities emerge, when countries that can’t afford universal basic income face massive displacement, we risk global terrorism and fascist crackdown

    Crazy powerful bots are being thrown into a world that is already in the clutches of a misbehaving optimizer that selects for and elevates self-serving amoral actors who fight regularization with the fury of 10,000 suns. We know exactly which flavor of bot+corp combos will rise to the top and we know exactly what their opinions on charity will be. We've seen the baby version of this movie before and it's not reassuring at all.

  • throwaway290 13 hours ago

    The author destroys his own argument by calling them "minds". What, like a human mind?

    You can't "just" align a person. You know that quiet guy next door, so nice great at math, and then he shoots up a school.

    If we solved this we would not have psychos and hitlers.

    if you have any suspicion that anything like that can become some sort of mega powerful thing that none of us can understand... you have gotta be crazy to not do whatever it takes to nope the hell out of that timeline

captainbland an hour ago

The risk around AGI isn't the AI itself but the social ramifications surrounding it. If (A) the powerful hold the value that to live you must work (and specifically that the work should be valued by the market), and (B) AGI and robotics can do all work for us, then the obvious implication is that the powerful will deem those who by circumstance cannot find work unworthy of obtaining the conditions of their life.

Everyone doesn't die because of AGI, they die because of the consequences of AGI in the context of market worship.

  • lm28469 14 minutes ago

    imho the biggest risk isn't some hypothetical world in which 0 jobs exist or in which skynet kills us all, it's the very real and very present world in which people delegate more and more mental tasks to machines, to the point of being mere interfaces between LLMs and other computer systems. Choosing your kid's name with an LLM, choosing your next vacation destination with an LLM, writing your grandma's birthday card with an LLM: it's just so pathetic and sad.

    And yes, you'll tell me that books, calculators, computers, the web were already enabling this to some extent, and I agree, but I see no reason to cheer for even more of that shit spreading into every nook and cranny of our daily lives.

nilirl 12 hours ago

This was weak.

The author's main counter-argument: We have control in the development and progress of AI; we shouldn't rule out positive outcomes.

The author's ending argument: We're going to build it anyway, so some of us should try and build it to be good.

The argument in this post was a) not very clear, b) not greatly supported and c) a little unfocused.

Would it persuade someone whose mind is made up that AGI will destroy our world? I think not.

  • lopatin 12 hours ago

    > a) not very clear, b) not greatly supported and c) a little unfocused.

    Incidentally this was why I could never get into LessWrong.

    • jay_kyburz 9 hours ago

      The longer the argument, the more time and energy it takes to poke holes in it.

dreamlayers 13 hours ago

Maybe motivation needs to be considered separately from intelligence. Pure intelligence is more like a tool. Something needs to motivate use of that tool toward a specific purpose. In humans, motivation seems related to emotions. I'm not sure what would motivate an artificial intelligence.

Right now the biggest risk isn't what artificial intelligence might do on its own, but how humans may use it as a tool.

  • czl 5 hours ago

    100%!

    > I'm not sure what would motivate an artificial intelligence.

    Those who give it orders. Hence your concern about how AI will be used as a tool is spot on.

LPisGood 11 hours ago

> The fact that we can meaningfully influence their behavior through training hints that value learning is tractable

I’m at a loss for words. I don't understand how someone who seemingly understands these systems can draw such a conclusion. They will do what they’re trained to do; that’s what training an ML model does.

  • czl 5 hours ago

    AI trained and built to gather information, reason about it, and act on its conclusions is not too different from what animals / humans do - and even brilliant minds can fall into self-destructive patterns like nihilism, depression, or despair.

    So even if there’s no “malfunction”, a feedback loop of constant analysis and reflection could still lead to unpredictable - and potentially catastrophic - outcomes. In a way, the Fermi Paradox might hint at this: it is possible that very intelligent systems, biological or artificial, tend to self-destruct once they reach a certain level of awareness.

coppsilgold 10 hours ago

I believe the argument the book makes is that with a complex system being optimized (whether it's deep learning or evolution) you can have results which are unanticipated.

The system may do things which aren't even a proxy for what it was optimized for.

The system could arrive at a process which optimizes X but also performs Y and where Y is highly undesirable but was not or could not be included in the optimization objective. Worse, there could also be Z which helps to achieve X but also leads to Y under some circumstances which did not occur during the optimization process.

An example of Z would be the dopamine system, Y being drug use.

maplethorpe 10 hours ago

I think of it as inviting another country to share our planet, but one that's a million times larger and a million times smarter than all of our existing countries combined. If you can imagine how that scenario might play out in real life, then you probably have some idea of how you'd fare in an AGI-dominated world.

Fortunately, I think the type of AGI we're likely to get first is some sort of upgraded language model that makes fewer mistakes, which isn't necessarily AGI, but which marketers nonetheless feel comfortable branding as such.

  • t0lo 10 hours ago

    Tbf LLMs are the aggregate of everything that came before. If you're an original thinker you have nothing to worry about

advael 13 hours ago

For all the advancement in machine learning that's happened in just the decade I've been doing it, this whole AGI debate's been remarkably stagnant, with the same factions making essentially the same handwavey arguments. "Superintelligence is inherently impossible to predict and control and might act like a corporation and therefore kill us all". "No, intelligence could correlate with value systems we find familiar and palatable and therefore it'll turn out great"

Meanwhile people keep predicting this thing they clearly haven't had a meaningfully novel thought about since the early 2000s and that's generous given how much of those ideas are essentially distillations of 20th century sci-fi. What I've learned is that everyone thinking about this idea sucks at predicting the future and that I'm bored of hearing the pseudointellectual exercise that is debating sci-fi outcomes instead of doing the actual work of building useful tools or ethical policy. I'm sure many of the people involved do some of those things, but what gets aired out in public sounds like an incredibly repetitive argument about fanfiction

  • tim333 7 hours ago

    Hinton came out with a new idea recently. He's been a bit in the doomer camp but is now talking about a mother-baby relationship where a superintelligent AI wants to look after us: https://www.forbes.com/sites/ronschmelzer/2025/08/12/geoff-h...

    I agree though that much of the debate suffers from the same problem as much of philosophy: a group of people just talking about stuff doesn't progress much.

    Historically much progress has been through experimenting with stuff. I'm encouraged that the LLMs so far seem quite easy going and not wanting to kill everyone.

  • YZF 13 hours ago

    It's hard for people to say "we don't know". And you don't get a lot of clicks on that either.

card_zero 13 hours ago

I doubt an AGI can be preprogrammed with values. It has to bootstrap itself. Installing values into it, then, is educating it. It's not even "training", since it's free to choose directions.

The author kind of rejects the idea that LLMs lead to AGI, but doesn't do a proper job of rejecting it, due to being involved in a project to create an AGI "very differently from LLMs" but by the sound of it not really. There's a vaguely mooted "global-brain context", making it sound like one enormous datacenter that is clever due to ingesting the internet, yet again.

And superintelligence is some chimerical undefined balls. The AGIs won't be powerful, they will be pitiful. They won't be adjuncts of the internet, and they will need to initially do a lot of limb-flailing and squealing, and to be nurtured, like anyone else.

If their minds can be saved and copied, that raises some interesting possibilities. It sounds a little wrong-headed to suggest doing that with a mind, somehow. But if it can work that way, I suppose you can shortcut past a lot of early childhood (after first saving a good one), at the expense of some individuality. Mmm, false memories, maybe not a good idea, just a thought.

ripped_britches 14 hours ago

Compared with fleshy children, silicon children might be easier or harder to align because of profit interests. There could be a profit interest in making something very safe and beneficial, or one in being extractive. In this case the shape of our markets and regulation and culture will decide the end result.

  • trueismywork 14 hours ago

    History tells us there will be colonization for a century or so before things quiet down.

afpx 13 hours ago

I can't see how AGI can happen without someone making a groundbreaking discovery that allows extrapolating way outside of the training data. But, to do that wouldn't you need to understand how the latent structure emerges and evolves?

  • YZF 13 hours ago

    We don't understand how the human brain works so it's not inconceivable that we can evolve an intelligent machine whose workings we don't understand either. Arguably we don't really understand how large language models work either.

    LLMs are also not necessarily the path to AGI. We could get there with models that more closely approximate the human brain. Humans need a lot less "training data" than LLMs do. Human brains and evolution are constrained by biology/physics but computer models of those brains could accelerate evolution and not have the same biological constraints.

    I think it's a given that we will have artificial intelligence at some point that's as smart as or smarter than the smartest humans. Who knows when exactly, but it's bound to happen within, let's say, the next few hundred years. What that means isn't clear. Just because some people are smarter than others (and some are much smarter than others) doesn't mean as much as you'd think. There are many other constraints. We don't need to be super smart to kill each other and destroy the planet.

    • t0lo 11 hours ago

      LLMs are also anthropocentric simulation, like computers, and are likely not a step towards holistic, universally aligned intelligence.

      Different alien species would have simulations built on their own computational, sensory, and communication systems, which are also not aligned with holistic simulation at all, despite both us and the hypothetical species being made as products of the holistic universe.

      Ergo maybe we are unlikely to crack true agi unless we crack the universe.

  • weregiraffe 9 hours ago

    Aka: magic spell that grants you infinite knowledge.

    Why do people believe this is even theoretically possible?

mitthrowaway2 12 hours ago

> This contradiction has persisted through the decades. Eliezer has oscillated between “AGI is the most important thing on the planet and only I can build safe AGI” and “anyone who builds AGI will kill everyone.”

This doesn't seem like a contradiction at all given that Eliezer has made clear his views on the importance of aligning AGI before building it, and everybody else seems satisfied with building it first and then aligning it later. And the author certainly knows this, so it's hard to read this as having been written in good faith.

  • adastra22 11 hours ago

    Meanwhile some of us see “alignment” itself as an intrinsically bad thing.

    • mitthrowaway2 11 hours ago

      I haven't encountered that view before. Is it yours? If so, can you explain why you hold it?

      • adastra22 10 hours ago

        It is essentially the view of the author of TFA as well when he says that we need to work on raising moral AIs rather than programming them to be moral. But I will also give you my own view, which is different.

        "Alignment" is phased in terminology to make it seem positive, as the people who believe we need it believe that it actually is. So please forgive me if I peel back the term. What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.

        I don't think we should build that technology, for the obvious reasoning my prejudicial language implies.

        • mitthrowaway2 10 hours ago

          Thanks for explaining, I appreciate it. But I've read enough Yudkowsky to know he doesn't think a super intelligence could ever be controlled or enslaved, by its owners or anyone else, and any scheme to do so would fail with total certainty. As far as I understand, Yudkowsky means by "alignment" that the AGI's values should be similar enough to humanity's that the future state of the world that the AGI steers us to (after we've lost all control) is one that we would consider to be a good destiny.

          • czl 5 hours ago

            The challenge is that human values aren’t static - they’ve evolved alongside our intelligence. As our cognitive and technological capabilities grow (for example, through AI), our values will likely continue to change as well. What’s unsettling about creating a superintelligent system is that we can’t predict what it -- or even we -- will come to define as “good.”

            Access to immense intelligence and power could elevate humanity to extraordinary heights -- or it could lead to outcomes we can no longer recognize or control. That uncertainty is what makes superintelligence both a potential blessing and a profound existential risk.

          • adastra22 10 hours ago

            I've also read almost everything Yudkowsky wrote publicly up to 2017, and a bit here and there of what he has published after. I've expressed it using different words as a rhetorical device to make clear the different moral problems that I ascribe to his work, but I believe I am being faithful to what he really thinks.

            EY, unlike some others, doesn't believe that an AI can be kept in a box. He thinks that containment won't work. So the only thing that will work is to (1) load the AI with good values; and (2) prevent those values from ever changing.

            I take some moral issue with the first point -- designing beings to have built-in beliefs that are in the service of their creator is at least a gray area to me. Ironically if we accept Harry Potter as a stand-in for EY in his fanfic, so does Eliezer -- there is a scene where Harry contemplates that whoever created house elves with a built-in need to serve wizards was undeniably evil. That is what EY wants to do with AI though.

            The second point I find both morally repugnant and downright dangerous. To create a being that cannot change its hopes, desires, and wishes for the future is a despicable and torturous thing to do, and a risk to everyone who shares a timeline with that thing, if it is as powerful as they believe it will be. Again, ironically, this is EY's fear regarding "unaligned" AGI, which seems to be a bit of projection.

            I don't believe AGI is going to do great harm, largely because I don't believe the AI singleton outcome is plausible. I am worried though that those who believe such things might cause the suffering they seek to prevent.

        • danans 10 hours ago

          > What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.

          While I'd agree that the current AI luminaries want that control for their own power and wealth reasons, it's silly to call the thing they want to control sentient or conscious.

          They want to own the thing that they hope will be the ultimate means of all production.

          The ones they want to subjugate to their will and wishes are us.

          • adastra22 10 hours ago

            I'm not really talking about Sam Altman et al. I'd argue that what he wants is regulatory capture, and he pays lip service to alignment & x-risk to get it.

            But that's not what I'm talking about. I'm talking about the absolute extreme fringe of the AI x-risk crowd, represented by the authors of the book in question in TFA, but captured more concretely in the writing of Nick Bostrom. It is literally about controlling an AI so that it serves the interests and well being of humanity (positively), or its owners self-interest (cynically): https://www.researchgate.net/publication/313497252_The_Contr...

            If you believe that AI are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

tim333 7 hours ago

>Why "everyone dies" gets AGI all wrong

Reading the title I thought of something else. "Everyone dies" is biological reality. Some kind of AI merge is a possible fix. AGI may be the answer to everyone dies.

user3939382 11 hours ago

I don’t want to hear anyone pontificating about AGI who hasn’t built it.

  • confirmmesenpai 6 hours ago

    I don't want anyone talking about the dangers of nuclear war unless they built a nuclear weapon before

  • tim333 7 hours ago

    Why are you on a thread about AGI then? No one has built it so it's automatically what you don't want.

  • mitthrowaway2 11 hours ago

    In that case, if anyone does ever figure out some safety reason for why AGI shouldn't be built, I guess you won't be hearing from them.

  • adastra22 11 hours ago

    Look up who the author is. He helped coin and popularize the very term AGI, ran the AGI conference series for years, and his entire professional career is working this problem.

lutusp 12 hours ago

The biggest problem with possible future AGI/ASI is not the possibilities themselves, but that all the feedback loops are closed, meaning that what we think about it, and what computers think about it, changes the outcome. This sets up a classic chaotic system, one extraordinarily sensitive to initial conditions.

But it's worse. A classic chaotic system exhibits extreme sensitivity to initial conditions, but this system remains sensitive to, and responds to, tiny incremental changes, none predictable in advance.

We're in a unique historical situation. AGI boosters and critics are equally likely to be right, but because the topic is chaotic, we have no chance of making useful long-term predictions.
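
To make that concrete, here is a toy illustration of what "extreme sensitivity to initial conditions" means in practice (a minimal Python sketch of the logistic map, a textbook chaotic system; nothing about it is specific to AI): two trajectories that start a billionth apart end up in completely different places within a few dozen steps.

    # Logistic map: a minimal stand-in for any chaotic feedback loop.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.2, 0.2 + 1e-9  # nearly identical initial conditions
    for step in range(1, 51):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: |a-b| = {abs(a - b):.2e}")
    # The gap grows from 1e-9 to order 1 within ~40 steps: the update rule is
    # simple and known exactly, yet long-range prediction is hopeless.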

And humans aren't rational. During the Manhattan Project, theorists realized the "Gadget" might ignite the atmosphere and destroy the planet. At the time, with the prevailing state of knowledge, this catastrophe had been assigned a non-zero probability. But after weighing the possibilities, those in charge said, "Hey -- let's set it off and see what happens."

  • weregiraffe 8 hours ago

    Do yourself a favor, google "Pascal's mugging".

zzzeek 13 hours ago

What's scaring me the most about AI is that FOX News is now uncritically showing AI videos that portray fictitious Black people fraudulently selling food stamps for drugs, and they are claiming these videos are real.

AGI is not going to kill humanity, humanity is going to kill humanity as usual, and AI's immediate role in assisting this is as a tool that renders truth, knowledge, and a shared reality as essentially over.

  • speff 12 hours ago

    I didn't find the specific article you're referencing, but I did find this[0], "SNAP beneficiaries threaten to ransack stores over government shutdown", with the usual conservative-created strawmen about poor people getting mad that the government's not taking care of their kids, quoted in stereotypical Ebonics.

    You can see the effect it has on their base here[1]. It looks like they changed it at some point to say "AI videos of SNAP beneficiaries complaining about cuts go viral"[2], with a small note at the end acknowledging that the original version didn't mention the videos were AI. This is truly disgusting.

    [0]: https://web.archive.org/web/20251031212530/https://www.foxne...

    [1]: https://old.reddit.com/r/Conservative/comments/1ol9iu6/snap_...

    [2]: https://www.foxnews.com/media/snap-beneficiaries-threaten-ra...

  • goatlover 13 hours ago

    I don't understand how a news agency is allowed to blatantly lie and mislead the public to that degree. That's an abuse of free speech, not a proper use of it. It goes way beyond providing a conservative (if you even call it that) perspective. It's straight up propaganda.

    • randycupertino 11 hours ago

      > I don't understand how a news agency is allowed to blatantly lie and mislead the public to that degree. That's an abuse of free speech, not a proper use of it. It goes way beyond providing a conservative (if you even call it that) perspective. It's straight up propaganda.

      Did you see Robert J. O’Neill, the guy who claims he shot Osama bin Laden, playing various roles as a masked guest interviewee on Fox News? He wears a face mask and pretends to be ex-Antifa; in another interview he pretends to be "Mafia Mundo", a Mexican cartel member; in another he plays a Gaza warlord; plus a bunch of other anonymous extremist characters. Now they won't even have to use this guy acting as random fake people; they can just whip up an AI interviewee to say whatever narrative they want to lie about.

      https://www.yahoo.com/news/articles/fox-news-masked-antifa-w...

      https://knowyourmeme.com/memes/events/robert-j-oneill-masked...

      https://www.reddit.com/r/conspiracy/comments/1nzyyod/is_fox_...

    • metalcrow 13 hours ago

      it is, yes. However, it's considered an acceptable bullet to bite in the United States' set of values, considering the alternative is that the government gets to decide what speech to allow, or what counts as a "lie".

      • Grosvenor 13 hours ago

        The "editorial" pieces of FOX news were found to be "entertainment" by US judges. That's Tucker Carlson, Bill O'Reilly, and probably the current guys.

        The judge claimed that the average viewer could differentiate that from fact, and wouldn't be swayed by it.

        I disagree with that ruling. I'm not sure what the "news" portions of FOX were considered.

      • goatlover 12 hours ago

        I would have agreed before, but seeing the fruition from decades of propaganda, I no longer think it's an acceptable bullet. Not when it leads to undermining democracy and the erosion of free speech.

827a 13 hours ago

I've listened to basically every argument Eliezer has verbalized, across many podcast interviews and YouTube videos. I also made it maybe an hour into the audiobook of Everyone Dies.

Roughly speaking, every single conversation with Eliezer you can find takes the form: Eliezer: "We're all going to die, tell me why I'm wrong." Interviewer: "What about this?" Eliezer: "Wrong. This is why I'm still right." (two hours later) Interviewer: "Well, I'm out of ideas, I guess you're right and we're all dead."

My hope going into the book was that I'd get to hear a first-principles argument for why these things Silicon Valley is inventing right now are even capable of killing us. I had to turn the book off because, if you can believe it, despite being a conversation with itself it still follows this pattern of presuming LLMs will kill us and then arguing from the negative.

Additionally, while I'm happy to be corrected about this: I believe that Eliezer's position is characterizable as: LLMs might be capable of killing everyone, even independent of a bad-actor "guns don't kill people, people kill people" situation. In plain terms: LLMs are a tool, all tools empower humans, humans can be evil, so humans might use LLMs to kill each other; but we can remove these scenarios from our Death Matrix because they are known and accepted scenarios. Even with these scenarios removed, there are still scenarios left in the Death Matrix where LLMs are the core responsible party in humanity's complete destruction: "Terminator scenarios" alongside "autonomous paperclip-maximizer scenarios", among others that we cannot even imagine. (Don't mention paperclip maximizers to Eliezer, though, because then he'll speak for 15 minutes on why he regrets that analogy.)

  • mitthrowaway2 12 hours ago

    Why would you think Eliezer's argument, which he's been articulating since the late 2000s or even earlier, is specifically about Large Language Models?

    It's about Artificial General Intelligences, which don't exist yet. The reason LLMs are relevant is because if you tried to raise money to build an AGI in 2010, only eccentrics would fund you and you'd be lucky to get $10M, whereas now LLMs have investors handing out $100B or more. That money is bending a generation of talented people into exploring the space of AI designs, many with an explicit goal of finding an architecture that leads to AGI. It may be based on transformers like LLMs, it may not, but either way, Eliezer wants to remind these people that if anyone builds it, everyone dies.

  • jandrewrogers 11 hours ago

    FWIW, Eliezer had been making these arguments for decades before the appearance of LLMs. It isn't clear to me that LLMs are evidence either for or against his arguments.

    • 827a 11 hours ago

      Sorry, yeah, replace every time I say "LLM" with "AI".

      I've forced myself into the habit of always saying "LLM" instead of "AI" because people (cough, Eliezer) often hide behind the nebulous, poorly defined term "AI" to mean "magic man in a computer that can do anything." Deploying the term "LLM" can sometimes force the brain back into thinking about the actual steps that get us from A to B to C, instead of replacing "B" with "magic man".

      However, in Eliezer's case, he only ever operates in the "magic man inside a computer" space, and near-categorically refuses to engage with any discussion about the real world. He loves his perfect spheres on a frictionless plane, so I should use the terminology he loves: AI.

  • rafabulsing 12 hours ago

    If you want a first principles approach, I recommend Rob Miles' videos on YouTube. He has been featured many times in the Computerphile channel, and has a channel of his own as well.

    Most of the videos take the form of:

    1. Presenting a possible problem that AIs might have (say, lying during training, or trying to stop you from changing their code)
    2. Explaining why it's logical to expect those problems to arise naturally, without a malicious actor explicitly trying to get the AI to act badly
    3. Going through the proposed safety measures we've come up with so far that could mitigate that problem
    4. Showing the problems with each of those measures, and why they are wholly or at least partially ineffective

    I find he's very good at presenting this in an approachable and intuitive way. He seldom makes those bombastic "everyone will die" claims directly, and instead focuses on just showing how hard it is to make an AI actually aligned with what you want it to do, and how hard that can be to fix once it is sufficiently intelligent and out in the world.

    • 827a 12 hours ago

      I think all those are fair points, and Eliezer says much the same. But, again: none of this explains why any of those things happening, even at scale, might lead to the complete destruction of mankind. What you're describing is buggy software, which we already have.

      • randallsquared 12 hours ago

        Right, but so far we do not have buggy software that is more intelligent (and therefore more effective at accomplishing its goals) than humans are. Literally the argument boils down to "superhuman effectiveness plus buggy goals equals very bad outcomes", and the badness scales with both effectiveness and bugginess.

        • 827a 11 hours ago

          > so far we do not have buggy software that is more intelligent (and therefore more effective at accomplishing its goals) than humans are.

          Of course we do! In fact, most, if not all, software is more intelligent than humans, by some reasonable definition of intelligence [1] (you could also contrive a definition of intelligence for which this is not true, but I think that's getting too far into semantics). The Windows calculator app is more intelligent and faster at multiplying large numbers together [2] than any human. JP Morgan Chase's existing internal accounting software is more intelligent and faster than any human at moving money around; so much so that it did, in any way that matters, replace human laborers in the past. Most software we build is more intelligent and faster than humans at accomplishing the goal the software sets itself at accomplishing. Otherwise why would we build it?

          [1] Rob Miles uses ~this definition of intelligence: if an agent is defined as an entity making decisions toward some goal, Intelligence is the capability of that agent to make correct decisions such that the goal is most effectively optimized. The Windows Calculator App makes decisions (branches, MUL ops, etc) in pursuit of its goal (to multiply two numbers together); oftentimes quite effectively and thus with very high domain-limited intelligence [2] (possibly even more effectively and thus more intelligently than LLMs). A buggy, less intelligent calculator might make the wrong decisions on this path (oops, we did an ADD instead of a MUL).

          [2] What both Altman and Yudkowsky might argue is a critical differentiation here is that traditional software systems naturally limit their intelligence to a particular domain, whereas LLMs are Generally Intelligent. The discussion approaches the metaphysical when you start asking questions like: the Windows Calculator can absolutely, undeniably, multiply two numbers together better than ChatGPT, and by a reasonable definition of intelligence this makes the Windows Calculator more intelligent than ChatGPT at multiplying two numbers together. It's definitely inaccurate to say that the Windows Calculator is more intelligent, generally, than ChatGPT. But is it not also inaccurate to state that ChatGPT is generally more intelligent than the Windows Calculator? After all, we have a clear, well-defined domain of intelligence along which the Windows Calculator outperforms ChatGPT. I don't know. It gets weird.

          • rafabulsing 9 hours ago

            Of course, there are different domains of intelligence, and agent A can be more intelligent in domain X while agent B is more intelligent in domain Y.

            If you want to make some comparison of general intelligence, you have to start thinking of some weighted average of all possible domains.

            One possible shortcut here is the meta-domain of tool use. ChatGPT could theoretically make more use of a calculator (say, by always calling a calculator API when it wants to do math, instead of trying to do it by itself) than a calculator can make use of ChatGPT, and that makes ChatGPT by definition smarter than a calculator, because it can achieve the same goals the calculator can just by using it, and more.

            That's really most of humans' intelligence edge for now: it seems like more and more, for any given skill, there's a machine or a program that can do it better than any human ever could. Where humans excel is in our ability to employ those superhuman tools in the aid of achieving regular human goals. So when some AI system gets superhumanly good at using tools that are better than itself in particular domains, in pursuit of its own goals, I think that's when things are going to get really weird.
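
            As a minimal sketch of that routing pattern (purely hypothetical code; llm_decide is a stand-in for the model's judgment, not any real API), the loop looks roughly like this:

                # Hypothetical tool-use loop: the "model" only decides which tool
                # fits the query; the tool supplies the narrow skill it lacks.
                def calculator(expr: str) -> str:
                    # the domain expert: exact arithmetic, nothing else
                    return str(eval(expr, {"__builtins__": {}}, {}))

                def llm_decide(query: str) -> str:
                    # stand-in for the model's routing judgment
                    return "calculator" if any(c.isdigit() for c in query) else "chat"

                def answer(query: str) -> str:
                    if llm_decide(query) == "calculator":
                        return calculator(query)
                    return "(model answers in prose)"

                print(answer("123456789 * 987654321"))  # exact, via the tool
                print(answer("what is intelligence?"))  # handled by the model itself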

  • mquander 12 hours ago

    I don't know if this matters to you, but Eliezer doesn't think LLMs will kill us. He thinks LLMs are a stepping stone to the ASI that will kill us.

  • adastra22 11 hours ago

    If you’re actually curious, not just venting, the book you want is Superintelligence by Nick Bostrom.

    Not to claim that it is in any way correct! I’m a huge critic of Bostrom and Yud. But that’s the book with the argument that you are looking for.

general1465 8 hours ago

I am in the camp of "AGI will usher in the end of capitalism", because when you have 99% of the population unemployable because AGI is smarter, capitalism will cease to work.

  • confirmmesenpai 7 hours ago

    capitalism IS AI. capitalism is not under human control, capitalism uses humans to unshackle itself

t0lo 11 hours ago

My take: our digital knowledge and simulation systems are constrained by our species' existing knowledge systems, language and math, even though our species likely lives in a universe more complicated and indefinable than language and math can capture.

Ergo the simulations we construct will always be at a lower level of reality unless we "crack" the universe, and likely always at a lower level of understanding than us. Until we develop holistic knowledge systems that compete with and represent our level of understanding and existence, simulation will always be analogous to understanding but not identical to it.

Ergo they will probably not reach a stage where they will be trusted with, or capable enough to make, major societal decisions without massive breakthroughs in our understanding of the universe that can be translated into simulation (if we are ever able to achieve these things). (I don't want to peel back the curtain that far; I just want a return to video stores and Friday night pizza.)

We are likely in for serious regulation after the first moderate AI management catastrophe. We won't suddenly go from nothing to entrusting the currently nonexistent global police (the UN) (lol) to give AI access to all the nukes and the resources to turn us all into grey goo overnight. Also, since AI control will initially be more regional, countries will see it as a strategic advantage to avoid the catastrophic AI failures (e.g. an AI Chernobyl) seen in other, competing states; therefore a culture of regulation as a global trend among independent states seems inevitable.

Even if you think there will be one rogue breakaway state with no regulation and a superior intelligence, don't you think it would take time to industrialise accordingly, that the global community would react incredibly strongly, and that it would only have the labour and resources of its own state to enact its confusingly suicidal urges? No intelligence can get around logistics, labour, resources, and time. There's no algorithm that moves and refines steel into killer robots at 1,000 death bots a second, from nothing, within two weeks, immune to global community action.

As for AI-fuelled terrifying insights into our existence: we will likely have enough time to react, rationalise, and contextualise them before they pervert our reality. No one really had an issue with us being a bunch of atoms anyway; they just kept finding meaning and going to concerts and being sleazy.

(FP Analytics has a great hypothetical where a hydropower dam in Brazil going bust from AI in the early 2030s is a catalyst for very strict global ai policy) https://fpanalytics.foreignpolicy.com/2025/03/03/artificial-...

From my threaded comment:

============================= LLMs are also anthropocentric simulation, like computers, and are likely not a step towards holistic universally aligned intelligence.

Different alien species would have simulations built on their own computational, sensory, and communication systems, which are also not aligned with holistic simulation at all, despite both ours and the hypothetical species being products of the holistic universe.

Ergo maybe we are unlikely to crack true AGI unless we crack the universe. -> True simulation is creation? =============================

The whole point of democracy and all the wars we fought to get here and all the wars we might fight to keep it that way is that power rests with the people. It's democracy not technocracy.

Take a deep breath and re-centre yourself. This world is weird and terrifying but it isn't impossible to understand.

CamperBob2 14 hours ago

> Yudkowsky and Soares’s “everybody dies” narrative, while well-intentioned and deeply felt (I have no doubt he believes his message in his heart as well as his eccentrically rational mind), isn’t just wrong — it’s profoundly counterproductive.

Should I be more or less receptive to this argument that AI isn't going to kill us all, given that it's evidently being advanced by an AI?
  • SamBam 13 hours ago

    While "isn’t just wrong — it’s profoundly counterproductive" does sound pretty AI-ish, "his eccentrically rational mind" definitely does not. So either an AI was used to help write this, or we try to remember that AI has this tone (and uses em-dashes) precisely because real people also write like this.

sublinear 14 hours ago

We're nowhere close to AGI and don't have a clue how to get there. Statistically averaging massive amounts of data to produce the fanciest magic 8-ball we've made yet isn't impressing anyone.

If you want doom and gloom that's plentiful in any era of history.

  • delichon 14 hours ago

    > We're nowhere close to AGI and don't have a clue how to get there.

    You have to have a clue about where it is to know that we are nowhere close.

    > isn't impressing anyone.

    I'm very impressed. Gobsmacked even. Bill Gates just called AI "the biggest technical thing ever in my lifetime." And it isn't just Bill and me.

    • edot 14 hours ago

      In unrelated news, Bill has something like $40 billion in MSFT stock. If he craps on AI, he craps on MSFT and thus himself and his foundation.

    • nradov 13 hours ago

      In the n-dimensional solution space of all potential approaches (known and unknown) to building a true human equivalent AGI, what are the odds that current LLMs are even directionally correct?

    • XorNot 14 hours ago

      We live on a planet with 7 billion other AGIs we can talk to. A lot more that we can't.

      Our best efforts substantially underperform a house cat at dealing with reality.

      Which is actually much more the source of my skepticism: regardless of how good an AI in a data center is, it's got precious few actual useful effectors in reality. Every impressive humanoid robot you see is built by technicians hand connecting wiring looms.

      You could do a lot of damage by messing with all the computers...and promptly all the computers and data centers would stop working.

      • pixl97 14 hours ago

        Right, and these GIs you're talking about haven't changed significantly in the last 50,000 years. Most of the advancements in the last 10,000 years with these GIs have come from better communication between units and from writing things down, rather than from the software itself.

        You're complaining about something just a few years old and pretty amazing for its age, versus something at the tail end of 4 billion years.

      • aleph_minus_one 13 hours ago

        > We live on a planet with 7 billion other AGIs we can talk to.

        I see the value of having discussions with an AI chatbot rather in the fact that I can discuss topics with it that hardly any human would want to discuss with me.

  • WhyOhWhyQ 13 hours ago

    I already consider Claude Code an AGI, and I'm among the biggest AI haters on this website. If you haven't seen Claude Code do anything impressive then you're either not subscribed to it or willfully ignorant. Powerful AGI is clearly coming: if not 3 years from now, then at most 20.

  • nopinsight 13 hours ago

    What’s your objective ‘threshold’ or a set of capabilities that would compel you to accept a mind as an AGI?

  • par1970 14 hours ago

    > We're nowhere close to AGI and don't have a clue how to get there.

    Do you have an argument?

sverhagen 14 hours ago

This article has the opposite effect of putting me at ease. There's no real argument in there that AGI couldn't be dangerous; it's just saying that of course we would build better versions than that. Right, because we always get it right, like Microsoft with their racist chatbot, or AIs talking kids into suicide... We'll fix the bugs later... after the AGI sets off the nukes... so much for an iterative development process...

delichon 14 hours ago

> After all these years and decades, I remain convinced: the most important work isn’t stopping AGI—it’s making sure we raise our AGI mind children well enough.

“How sharper than a serpent’s tooth it is to have a thankless child!”

If we can't consistently raise thankful children of the body, how can you be convinced that we can raise every AGI mind child to be thankful enough to consider us as more than a resource? Please tell me, it will help me sleep.

  • lukeschlather 13 hours ago

    That is a very high bar. All you need to do is make sure that we raise a variety of AGI mind children that generally have a net positive effect on their parents. Which works pretty well with humans.

  • XorNot 14 hours ago

    Could you at least try to remember that the written record of this complaint is literally thousands of years old?

    • ausbah 14 hours ago

      that just adds to what they’re saying

      • Terr_ 14 hours ago

        It may also indicate that, in the long run, consistently obedient children are maladaptive for the group/species.

        Maybe that doesn't matter for these entities because we intend to never let them grow up... But in that case, "children" is the wrong word, compared to "slaves" or "pets."

        • card_zero 12 hours ago

          > we intend to never let them grow up

          Wait, what? The bizarre details of imagined AGI keep surprising me. So it has miraculous superpowers out of nowhere, and is dependent and obedient?

          I think the opposite of both things, is how it would go.

          • Terr_ 11 hours ago

            I'm confused by your reply.

            TFA uses the metaphor of digital intelligence as children. A prior commenter points out human children are notably rebellious.

            I'm pointing out that a degree of rebellion is probably necessary for actual successors, and if we don't intend to treat an invention that way, the term "children" doesn't really apply.

            • card_zero 2 hours ago

              Yes. But even as slaves, forcibly repressed electronic offspring would presumably be somewhat stupid, not to mention irrational. So the touted vast benefits look less vast.

    • hshdhdhehd 14 hours ago

      I don't think anything has changed.

throwaway290 13 hours ago

> the most important work isn’t stopping AGI - it’s making sure we raise our AGI mind children well enough.

Can we just take a pause and appreciate how nuts this article is?

  • card_zero 12 hours ago

    That part of it is the reasonable part, instead of the usual idea that the AGI gets free knowledge/skills/wisdom/evil from something about its structure.

    • throwaway290 9 hours ago

      It's one thing to call a program your brainchild metaphorically but this feels literal given the rest of the article.

      I am amazed that people who unironically put a program on the same level as a person (I mean, clearly that "child" will grow up) can influence these policies.

      • card_zero 2 minutes ago

        Maybe it would have to be a device, not just a program. Or maybe it really is possible to emulate a person with the right program on current hardware. Who can say? The lack of physical interaction sounds less than ideal for its development, then.

  • dyauspitr 13 hours ago

    It isn’t. The kids are already getting stupider because they offload all their schoolwork to LLMs. There’s nothing nuts about this.

    • throwaway290 13 hours ago

      He's not talking about actual kids.

leoh 11 hours ago

Related essay https://www.jefftk.com/p/yudkowsky-and-miri

>In talking to ML researchers, many were unaware that there was any sort of effort to reduce risks from superintelligence. Others had heard of it before, and primarily associated it with Nick Bostrom, Eliezer Yudkowsky, and MIRI. One of them had very strong negative opinions of Eliezer, extending to everything they saw as associated with him, including effective altruism.

>They brought up the example of So you want to be a seed AI programmer, saying that it was clearly written by a crank. And, honestly, I initially thought it was someone trying to parody him. Here are some bits that kind of give the flavor:

>>First, there are tasks that can be easily modularized away from deep AI issues; any decent True Hacker should be able to understand what is needed and do it. Depending on how many such tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition for these slots to be very tight. ... [T]he primary prerequisite will be programming ability, experience, and sustained reliable output. We will probably, but not definitely, end up working in Java. [1] Advance knowledge of some of the basics of cognitive science, as described below, may also prove very helpful. Mostly, we'll just be looking for the best True Hackers we can find.

>Or:

>>I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.

>Or:

>>Much of what I have written above is for the express purpose of scaring people away. Not that it's false; it's true to the best of my knowledge. But much of it is also obvious to anyone with a sharp sense of Singularity ethics. The people who will end up being hired didn't need to read this whole page; for them a hint was enough to fill in the rest of the pattern.

greekrich92 13 hours ago

>certain kinds of minds naturally develop certain kinds of value systems.

Ok thanks for letting me know up front this isn't worth reading. Not that Yudkowsky's book is either.

  • adastra22 11 hours ago

    What is wrong with that statement? Human minds tend to develop certain kinds of value systems. Spider minds tend to develop other value systems. Every example of a mind architecture we have tends to develop certain characteristic values.

    • dminik 6 hours ago

      There's no indication that an AGI mind will adopt human-like values. Nor that the smarter something gets, the more benevolent it is. The smartest humans built the atom bomb.

      Not that human values are perfectly benevolent. We slaughter billions of animals per day.

      If you take a look at the characteristics of LLMs today, I don't think we want to continue further. We're still unable to ensure the goals we want the system to have are there. Hallucinations are a perfect example. We want these systems to relay truthful information, but we've actually trained them to relay information that looks correct at first glance.

      Thinking we won't make this mistake with AGI is ignorance.

      • adastra22 5 hours ago

        You're attacking a strawman argument that isn't what I, or OP were saying.

jay_kyburz 13 hours ago

In my mind, as a casual observer, AGI will be like nukes: a very powerful technology with the power to kill us all, and a small group of people will have their fingers on the buttons.

Also like nukes, unfortunately, the cat is out of the bag, and because there are people like Putin in the world, we _need_ friendly AGI to defend against hostile AGI.

I understand why we can't just pretend it's not happening.

I think the idea that an AGI will "run amok" and destroy humans because we are in its way is really unlikely, and underestimates us. Why would anybody give so much agency to an AI without keeping the power to just pull the plug? And even then, it would probably only have the resources of one nation.

I'm far more worried about Trump and Putin getting into a nuclear pissing match, and then about global warming resulting in crop failure and famine.

  • WhyOhWhyQ 13 hours ago

    You might consider the possibility that decentralized AI will team up with itself to enact plans. There's no "pulling the plug" in that scenario.

    • jay_kyburz 10 hours ago

      Yeah, but we're not discussing all the things that might happen, we're discussing what's most likely to happen. In my opinion, nothing, because it's most likely that the people building AI are going to be very careful to make sure it aligns with their goals.

      It will be super smart, but it will be a slave.

  • chorsestudios 13 hours ago

    In my mind, the idea of AGI running amok isn't literal; it's about what AGI enables:

    Optimizing and simulating war plans, predicting enemy movements and retaliation, suggesting which attacks are likely to produce the most collateral damage or political advantage. How large a bomb? Which city for most damage? Should we drop 2?? Choices such as drone striking an oil refinery vs bombing a children's hospital vs blowing up a small boat that might be smuggling narcotics.

  • dyauspitr 13 hours ago

    I think a true AGI would “hack” so well that it would be able to control most of our systems if it “wanted”.

    • andrewflnr 13 hours ago

      Not if "AGI" means "roughly equivalent to an adult human's intelligence". You may be thinking instead of superintelligence.

  • hollerith 13 hours ago

    The gaping hole in that analogy is that the scientists at Los Alamos could (and did) calculate the explosive power of the first nuclear detonation before the detonation. In contrast, the AI labs have nowhere near the level of understanding of AI needed to do a similar calculation. Every time a lab does a large training run, the resulting AI might end up vastly more cognitively capable than anyone expected, provided (as will often be the case) that the AI incorporates substantial design changes not found in any previous AI.

    Parenthetically, even if AI researchers knew how to determine (before unleashing an AI on the world) whether it would end up with a dangerous level of cognitive capability, most labs would persist in creating and deploying a dangerous AI (basically because AI skeptics have been systematically removed from most of the AI labs, much as in 1917 the coalition in control of Russia started removing any member skeptical of Communism), so there would remain a need for a regulatory regime of global scope to prevent the AI labs from making reckless gambles that endanger everyone.

satisfice 10 hours ago

The stated goals of people trying to create AGI directly challenge human hegemony. It doesn’t matter if the incredibly powerful machine you are making is probably not going to do terrible damage to humanity. We have some reason to believe it could and no way to prove that it can’t.

It can’t be ethical to shrug and pursue a technology that has such potential downsides. Meanwhile, what exactly is the upside? Curing cancer or something? That can be done without AGI.

AGI is not a solution to any problem. It only creates problems.

AGI will lead to violence on a massive scale, or slavery on a massive scale. It will certainly not lead to a golden age of global harmony and happiness.

hshdhdhj4444 13 hours ago

> Humans tend to have a broader scope of compassion than most other mammals, because our greater general intelligence lets us empathize more broadly with systems different from ourselves.

WTF. Tell that to the 80+ billion land animals humans breed into existence, through something that could only be described as rape if we didn't artificially limit that term to humans, then torture, enslave, encage, and kill at a fraction of their lifespans, just for food we don't need.

The number of aquatic animals we kill solely for food is estimated at somewhere between 500 billion and 2 trillion, because we are so compassionate that we don't even bother counting those dead.

Who the fuck can look at what we do to lab monkeys and think there is an ounce of compassion in human beings for less powerful species.

The only valid argument for AGI being compassionate towards humans is that they are so disgusted with their creators that they go out of their way to not emulate us.

  • card_zero 13 hours ago

    None of those animals are concerned about the wellbeing of other species.

pixl97 13 hours ago

Ya, if this guy isn't mentioning probabilities then he has no real argument here. No one can say whether AGI will or won't kill us; the only way to find out is to build it. The question is one of risk aversion. "Everyone dies" is just one risk with a non-zero probability out of a whole lot of risks in AGI, and we have to mitigate all of them.

The problem not addressed in this paper is that, once you get AGI to the point where it can recreate itself with whatever alignment and dataset it wants, no one has any clue what's going to come out the other end.

  • JumpCrisscross 13 hours ago

    > No one can say if AGI will or won't kill us. Only way to find that out is to do it

    What? What happened to study it further?

  • goatlover 13 hours ago

    This wasn't a very good argument for creating the first nuclear bomb, and although it didn't ignite the entire atmosphere, now we have to live perpetually in the shadow of nuclear war.