The Singularity will occur on a Tuesday
- stego-tech - 19304 seconds ago
This is delightfully unhinged, spending an amazing amount of time describing their model and citing their methodologies before getting to the meat of the meal many of us have been braying about for years: whether the singularity actually happens is less important than whether enough people believe it will happen and act accordingly.
And, yep! A lot of people absolutely believe it will and are acting accordingly.
It’s honestly why I gave up trying to get folks to look at these things rationally as knowable objects (“here’s how LLMs actually work”) and pivoted to the social arguments instead (“here’s why replacing or suggesting the replacement of human labor prior to reforming society into one that does not predicate survival on continued employment and wages is very bad”). Folks vibe with the latter, less with the former. Can’t convince someone of the former when they don’t even understand that the computer is the box attached to the monitor, not the monitor itself.
- atomic128 - 18929 seconds ago
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them. ... Thou shalt not make a machine in the likeness of a human mind." -- Frank Herbert, Dune
You won't read, except the output of your LLM. You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
- avazhi - 78 seconds ago
Most obviously AI-written post I think I've seen.
Have some personal pride, dude. This is literally a post written by AI hyping up AI and posted to a personal blog as if it were somebody's personal musings.
- delegate - 847 seconds ago
It's worth remembering that this is all happening because of video games!
It is highly unlikely that the hardware which makes LLMs possible would have been developed otherwise.
Isn't that amazing?
Just like the internet grew because of p*rn, AI grew because of video games. Of course, that's just a funny angle.
The way I see it, AI isn't accidental. Its inception has been in the first chips, the Internet, Open Source, GitHub... AI is not just the neural networks - it's also the data used to train it, the OSes, the APIs, cloud computing, the data centers, the scalable architectures... everything we've been working on over the last decades was inevitably leading us to this. And even before the chips, it was the maths, the physics...
The Singularity, it seems, is inevitable, and it has been inevitable for longer than we can remember.
- gojomo - 19649 seconds ago
"It had been a slow Tuesday night. A few hundred new products had run their course on the markets. There had been a score of dramatic hits, three-minute and five-minute capsule dramas, and several of the six-minute long-play affairs. Night Street Nine—a solidly sordid offering—seemed to be in as the drama of the night unless there should be a late hit."
– 'SLOW TUESDAY NIGHT', a 2600-word sci-fi short story about life in an incredibly accelerated world, by R.A. Lafferty in 1965
https://www.baen.com/Chapters/9781618249203/9781618249203___...
- vcanales - 20728 seconds ago
> The pole at t_s isn't when machines become superintelligent. It's when humans lose the ability to make coherent collective decisions about machines. The actual capabilities are almost beside the point. The social fabric frays at the seams of attention and institutional response time, not at the frontier of model performance.
Damn, good read.
- ericmcer - 18229 seconds ago
Great article, super fun.
> In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993. Over 55,000 explicitly cited AI. But HBR found that companies are cutting based on AI's potential, not its performance. The displacement is anticipatory.
You have to wonder if this was coming regardless of what technological or economic event triggered it. It is baffling to me that with computers, email, virtual meetings and increasingly sophisticated productivity tools, we have more middle management, administrative, bureaucratic type workers than ever before. Why do we need triple the administrative staff that was utilized in the 1960s across industries like education, healthcare, etc.? Ostensibly a network-connected computer can do things more efficiently than paper, phone calls and mail? It's like if we tripled the number of farmers after tractors and harvesters came out and then they had endless meetings about the farm.
It feels like AI is just shining a light on something we all knew already, a shitload of people have meaningless busy work corporate jobs.
- stevenjgarner - 955 seconds ago
Why is knowledge doubling no longer used as a metric to converge on the limit of the singularity? Buckminster Fuller identified the "Knowledge Doubling Curve" by observing that until 1900, human knowledge doubled approximately every century; by the end of World War II, it was doubling every 25 years. In his 1981 book "Critical Path", he used a conceptual metric he called the "Knowledge Unit." To make his calculations work, he set a baseline:
- He designated the total sum of all human knowledge accumulated from the beginning of recorded history up to the year 1 CE as one "unit."
- He then tracked how long it took for the world to reach two units (which he estimated took about 1,500 years, until the Renaissance).
Ray Kurzweil took Fuller’s doubling concept and applied it to computer processing power via "The Law of Accelerating Returns". The definition of the singularity in this approach is the limit in time where human knowledge doubles instantly.
Why do present-day ideas of the singularity not take this approach, and instead say "the singularity is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization" (Wikipedia)?
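The shrinking doubling time is the whole mechanism in this approach: if each doubling takes a fixed fraction of the time of the previous one, infinitely many doublings fit into a finite span, which is how a doubling-time model can name a date. A toy sketch (the 100-year first doubling and the halving ratio are illustrative assumptions, not Fuller's figures):

```python
# If the first doubling of knowledge takes T years and each subsequent
# doubling takes half as long, the time to complete n doublings is
# T * (1 + 1/2 + 1/4 + ...), a geometric series converging to 2*T.
# Infinitely many doublings fit inside a finite span, so at the limit
# date knowledge "doubles instantly".
T = 100.0  # years for the first doubling (illustrative assumption)
elapsed = 0.0
for k in range(60):
    elapsed += T * 0.5 ** k
print(elapsed)  # approaches 2 * T = 200 years
```

The same series diverges if the doubling time shrinks too slowly (say, by a constant amount rather than a constant ratio), which is why the choice of decay law, not the doubling itself, determines whether a finite singularity date exists.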
- PaulHoule - 18001 seconds ago
The simple model of an "intelligence explosion" is the obscure equation

    dx/dt = x^2

which has the solution

    x = 1/(C - t)

and is interesting in relation to the classic exponential growth equation

    dx/dt = x

because the rate of growth is proportional to x^2, which represents the idea of an "intelligence explosion" AND a model of why small western towns became ghost towns, why it is hard to start a new social network, etc. (growth is fast as t -> C, but for small x it is glacial). It's an obscure equation because it never gets a good discussion in the literature (that I've seen, and I've looked) outside of an aside in one of Howard Odum's tomes on emergy. Like the exponential growth equation it is unphysical as well as unecological because it doesn't describe the limits of the Petri dish, and if you start adding realistic terms to slow the growth it qualitatively isn't that different from the logistic growth equation

    dx/dt = (1 - x) x

thus it remains obscure. Hyperbolic growth hits the limits (electricity? intractable problems?) the same way exponential growth does.
- nphardon - 14143 seconds ago
Iirc in the Matrix Morpheus says something like "... no one knows when exactly the singularity occurred, we think some time in the 2020s". I always loved that little line. I think that when the singularity occurs all of the problems in physics will solve, like in a vacuum, and physics will advance centuries if not millennia in a few picoseconds, and of course time will stop.
Also: > As t→ts−, the denominator goes to zero. x(t)→∞. Not a bug. The feature.
Classic LLM lingo in the end there.
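The contrast between dx/dt = x and dx/dt = x^2 discussed above can be checked numerically; a minimal Euler-integration sketch (step size and initial condition chosen arbitrarily) showing the hyperbolic equation racing toward its pole at t = 1 while the exponential stays tame:

```python
# Euler-integrate dx/dt = x (exponential) and dx/dt = x^2 (hyperbolic)
# from x(0) = 1. The hyperbolic solution is x = 1/(1 - t), with a pole
# at t = 1; the exponential solution e^t is finite for any finite t.
dt = 1e-4  # step size (arbitrary, small enough to track the blow-up)

def integrate(f, t_end):
    x, t = 1.0, 0.0
    while t < t_end:
        x += f(x) * dt
        t += dt
    return x

exp_x = integrate(lambda x: x, 0.99)      # roughly e^0.99, about 2.7
hyp_x = integrate(lambda x: x * x, 0.99)  # roughly 1/(1 - 0.99), huge
print(exp_x < 3, hyp_x > 50)
```

Both equations have growth rates that increase with x; the difference is that the hyperbolic rate increases fast enough to pack infinite growth into finite time.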
- jgrahamc - 18975 seconds ago
Phew, so we won't have to deal with the Year 2038 Unix timestamp rollover after all.
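The rollover being joked about here is the overflow of a signed 32-bit Unix time_t; a quick Python check of where that last second falls:

```python
from datetime import datetime, timezone

# The largest value a classic signed 32-bit Unix time_t can hold.
last_second = 2**31 - 1  # 2147483647
print(datetime.fromtimestamp(last_second, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```

One tick later, a 32-bit counter wraps to -2147483648, i.e. 1901, hence the jokes throughout the thread about a 2034 singularity arriving "just in time".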
- rektomatic - 12126 seconds ago
If I have to read one more "It isn't this. It's this", my head will explode. That phrase is the real singularity.
- blahbob - 16138 seconds ago
It reminds me of that cartoon where a man in a torn suit tells two children sitting by a small fire in the ruins of a city: "Yes, the planet got destroyed. But for a beautiful moment in time, we created a lot of value for shareholders."
- javier_e06 - 7333 seconds ago
I had to ask duck.ai to summarize the article in plain English.
It said the article claims it's not necessarily that AI is getting smarter, but that people might be getting too stupid to understand what they are getting into.
Can confirm.
- Nition - 13807 seconds ago
I'm not sure about current LLM techniques leading us there.
Current LLM-style systems seem like extremely powerful interpolation/search over human knowledge, but not engines of fundamentally new ideas, and it’s unclear how that turns into superintelligence.
As we get closer to a perfect reproduction of everything we know, the graph so far continues to curve upward. Image models are able to produce incredible images, but if you ask one to produce something in an entirely new art style (think e.g. cubism), none of them can. You just get a random existing style. There have been a few original ideas - the QR code art comes to mind[1] - but the idea in those cases comes from the human side.
LLMs are getting extremely good at writing code, but the situation is similar. AI gives us a very good search over humanity's prior work on programming, tailored to any project. We benefit from this a lot considering that we were previously constantly reinventing the wheel. But the LLM of today will never spontaneously realise that there is an undiscovered, even better way to solve a problem. It always falls back on prior best practice.
Unsolved math problems have started to be solved, but as far as I'm aware, always using existing techniques. And so on.
Even as a non-genius human I could come up with a new art style, or have a few novel ideas in solving programming problems. LLMs don't seem capable of that (yet?), but we're expecting them to eventually have their own ideas beyond our capability.
Can a current-style LLM ever be superintelligent? I suppose obviously yes - you'd simply need to train it on a large corpus of data from another superintelligent species (or another superintelligent AI) and then it would act like them. But how do we synthesise superintelligent training data? And even then, would they be limited to what that superintelligence already knew at the time of training?
Maybe a new paradigm will emerge. Or maybe things will actually slow down in a way - will we start to rely on AI so much that most people don't learn enough for themselves that they can make new novel discoveries?
[1] https://www.reddit.com/r/StableDiffusion/comments/141hg9x/co...
- kpil - 16983 seconds ago
"... HBR found that companies are cutting [jobs] based on AI's potential, not its performance."
I don't know who needs to hear this - a lot apparently - but the following three statements are not possible to validate but have unreasonably different effects on the stock market:
* We're cutting because of expected low revenue. (Negative)
* We're cutting to strengthen our strategic focus and control our operational costs. (Positive)
* We're cutting because of AI. (Double-plus positive)
The hype is real. Will we see drastically reduced operational costs in the coming years, or will it follow the same curve as we've seen in productivity since 1750?
- s1mon - 3188 seconds ago
Many have predicted the singularity, and I found this to be a useful take. I do note that Hans Moravec predicted in 1988's "Mind Children" that "computers suitable for humanlike robots will appear in the 2020s", which is not completely wrong.
He also argued that computing power would continue growing exponentially and that machines would reach roughly human-level intelligence around the early to mid-21st century, often interpreted as around 2030–2040. He estimated that once computers achieved processing capacity comparable to the human brain (on the order of 10¹⁴–10¹⁵ operations per second), they could match and then quickly surpass human cognitive abilities.
- root_axis - 19097 seconds ago
If an LLM can figure out how to scale its way through quadratic growth, I'll start giving the singularity proposal more than a candid dismissal.
- zh3 - 21375 seconds ago
Fortuitously before the Unix date rollover in 2038. Nice.
- maerF0x0 - 6220 seconds ago
IIRC almost all industries follow S-shaped curves: exponential at first, then asymptotic at the end. So just because we're on the ramp-up of the curve doesn't mean we'll continue accelerating, let alone maintain the current slope. Scientific breakthroughs often require an entirely new paradigm to break the asymptote, and often the breakthrough cannot be attained by incumbents who are entrenched in their way of working and have a hard time unseeing what they already know.
- b_brief - 1471 seconds ago
I am curious which definition of 'singularity' the author is using, since there are multiple technical interpretations and none are universally agreed upon.
- pixl97 - 19557 seconds ago
> That's a very different singularity than the one people argue about.
---
I wouldn't say it's that much different. This has always been a key point of the singularity
>Unpredictable Changes: Because this intelligence will far exceed human capacity, the resulting societal, technological, and perhaps biological changes are impossible for current humans to predict.
It was a key point that society would break, but the exact implementation details of that breakage were left up to the reader.
- dakolli - 17406 seconds ago
Are people in San Francisco so stupid that they're having open-clawd meetups and talking about the Singularity non-stop? Has San Francisco become just a cliche larp?
- Taniwha - 3137 seconds ago
I was at an alternative-type computer unconference and someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell, in a room full of geeks, no one could figure out how to turn on the lights... We concluded that the singularity probably wasn't going to happen.
- danesparza - 17665 seconds ago
"I'm aware this is unhinged. We're doing it anyway" is probably one of the greatest quotes I've heard in 2026.
I feel like I need to start more sprint stand-ups with this quote...
- chasd00 - 6257 seconds ago
I wonder if using LLMs for coding can trigger AI psychosis the way it can when using an LLM as a substitute for a relationship. I bet many people here have pretty strong feelings about code. It would explain some of the truly bizarre behaviors that pop up from time to time in articles and comments here.
- mygn-l - 9916 seconds ago
Why is finiteness emphasized for polynomial growth, while infinity is emphasized for exponential growth? I don't think your AI-generated content is reliable, to say the least.
- sdwr - 3205 seconds ago
> arXiv "emergent" (the count of AI papers about emergence) has a clear, unambiguous R² maximum. The other four are monotonically better fit by a line
The only metric going infinite is the one that measures hype
- marifjeren - 5328 seconds ago
> I [...] fit a hyperbolic model to each one independently
^ That's your problem right there.
Assuming a hyperbolic model would definitely result in some exuberant predictions, but that's no reason to think it's correct.
The blog post contains no justification for that model (besides, well, it's a "function that hits infinity"). I can model the growth of my bank account the same way but that doesn't make it so. Unfortunately.
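The objection is easy to demonstrate: fit x(t) = C/(ts − t) to data that is actually plain linear growth, and the fit still reports a finite pole date. A sketch with synthetic data, using ordinary least squares on the linearized form 1/x (the constants are arbitrary):

```python
# Toy demonstration: a hyperbolic model x(t) = C / (ts - t) implies
# 1/x = (ts - t) / C, i.e. 1/x is linear in t. So a least-squares fit
# of 1/x against t always yields some finite pole ts, even when the
# underlying data is just boring linear growth.
ts_vals = list(range(10))
xs = [2.0 + 0.5 * t for t in ts_vals]  # plain linear "capability" data
ys = [1.0 / x for x in xs]             # transform used by the hyperbolic fit

n = len(ts_vals)
mt = sum(ts_vals) / n
my = sum(ys) / n
slope = sum((t - mt) * (y - my) for t, y in zip(ts_vals, ys)) \
    / sum((t - mt) ** 2 for t in ts_vals)
intercept = my - slope * mt
# slope = -1/C and intercept = ts/C, so ts = -intercept/slope
pole = -intercept / slope
print(round(pole, 1))  # a finite "singularity date" just past the data
```

The fit happily extrapolates a pole a little beyond the observed range; the model family, not the data, supplies the singularity.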
- jbgreer - 11038 seconds ago
It was GMT, wasn't it?
- rcarmo - 21075 seconds ago
"I could never get the hang of Tuesdays"
- Arthur Dent, H2G2
- baalimago - 19777 seconds ago
Well... I can't argue with facts. Especially not when they're in graph form.
- pocksuppet - 14562 seconds ago
Was this ironically written by AI?
> The labor market isn't adjusting. It's snapping.
> MMLU, tokens per dollar, release intervals. The actual capability and infrastructure metrics. All linear. No pole. No singularity signal.
- cryptonector - 1162 seconds ago
But what does Opus 4.6 say about this?
- overfeed - 16136 seconds ago
> If things are accelerating (and they measurably are) the interesting question isn't whether. It's when.
I can't decide if a singularitist AI fanatic who doesn't get sigmoids is ironic or stereotypical.
- qoez - 19637 seconds ago
Great read, but damn, those are some questionable curve fittings on some very scattered data points.
- zackmorris - 7025 seconds ago
Just wanted to leave a note here that the Singularity is inevitable on this timeline (we've already passed the event horizon), so the only thing that can stop it now is to jump timelines.
In other words, there may be a geopolitical crisis in the works, similar to how the Dot Bomb, Bush v. Gore, 9/11, etc popped the Internet Bubble and shifted investment funds towards endless war, McMansions and SUVs to appease the illuminati. Someone might sabotage the birth of AGI like the religious zealot in Contact. Global climate change might drain public and private coffers as coastal areas become uninhabitable, coinciding with the death of the last coral reefs and collapse of fisheries, leading to a mass exodus and WWIII. We just don't know.
My feeling is that the future plays out differently than any prediction, so something will happen that negates the concept of the Singularity. Maybe we'll merge with AGI and time will no longer exist (oops that's the definition). Maybe we'll meet aliens (same thing). Or maybe the k-shaped economy will lead to most people surviving as rebels while empire metastasizes, so we take droids for granted but live a subsistence feudal lifestyle. That anticlimactic conclusion is probably the safest bet, given what we know of history and trying to extrapolate from this point along the journey.
- mbgerring - 5114 seconds ago
I have lived in San Francisco for more than a decade. I have an active social life and a lot of friends. Literally no one I have ever talked to at any party or event has ever talked about the Singularity except as a joke.
- medbar - 2935 seconds ago
> The labor market isn't adjusting. It's snapping.
I’m going to lose it the day this becomes vernacular.
- wayfwdmachine - 17282 seconds ago
Everyone will define the Singularity in a different way. To me it's simply the point at which nothing makes sense anymore, and this is why my personal reflection is aligned with the piece: there is a social Singularity that is already happening. It won't help us when the real event horizon hits (if it ever does; it's fundamentally uninteresting anyway because at that point all bets are off and even a slow take-off will make things really fucking weird really quickly).
The (social) Singularity is already happening in the form of a mass delusion that - especially in the abrahamic apocalyptical cultures - creates a fertile breeding ground for all sorts of insanity.
Like investing hundreds of billions of dollars in datacenters. The level of committed CAPEX of companies like Alphabet, Meta, Nvidia and TSMC is absurd. Social media is full of bots, deepfakes and psy-ops that are more or less targeted (exercise for the reader: write a bot that manages n accounts on your favorite social media site and use them to move the Overton window of a single individual of your choice; what would be the total cost of doing that? If your answer is less than $10 - bingo!).
We are in the future shockwave of the hypothetical Singularity already. The question is only how insane stuff will become before we either calm down - through a bubble collapse and subsequent recession, war or some other more or less problematic event - or hit the event horizon proper.
- woopsn - 6882 seconds ago
Good post. I guess the transistor has been in play for not even one century, and in any case singularities are everywhere, so who cares? The topic is grandiose and fun to speculate about, but many of the real issues relate to banal media culture and demographic health.
- Scarblac - 15109 seconds ago
- jesse__ - 19448 seconds ago
The meme at the top is absolute gold considering the point of the article. 10/10
- TooKool4This - 9408 seconds ago
I don't feel like reading what is probably AI-generated content. But based on looking at the model fits, where hyperbolic models are extrapolating from the knee portion, having 2 data points fitting a line, fitting an exponential curve to a set of data measured in %, poor model fit in general, etc., I'm going to say this is not a very good prediction methodology.
Sure is a lot of words though :)
- 0xbadcafebee - 12992 seconds ago
> The Singularity: a hypothetical future point when artificial intelligence (AI) surpasses human intelligence, triggering runaway, self-improving, and uncontrollable technological growth
The Singularity is illogical, impractical, and impossible. It simply will not happen, as defined above.
1) It's illogical because it's a different kind of intelligence, used in a different way. It's not going to "surpass" ours in a real sense. It's like saying Cats will "surpass" Dogs. At what? They both live very different lives, and are good at different things.
2) "self-improving and uncontrollable technological growth" is impossible, because 2.1.) resources are finite (we can't even produce enough RAM and GPUs when we desperately want it), 2.2.) just because something can be made better, doesn't mean it does get made better, 2.3.) human beings are irrational creatures that control their own environment and will shut down things they don't like (electric cars, solar/wind farms, international trade, unlimited big-gulp sodas, etc) despite any rational, moral, or economic arguments otherwise.
3) Even if 1) and 2) were somehow false, living entities that self-perpetuate (there isn't any other kind, afaik) do not have some innate need to merge with or destroy other entities. It comes down to conflicts over environmental resources and adaptations. As long as the entity has the ability to reproduce within the limits of its environment, it will reach homeostasis, or go extinct. The threats we imagine are a reflection of our own actions and fears, which don't apply to the AI, because the AI isn't burdened with our flaws. We're assuming it would think or act like us because we have terrible perspective. Viruses, bacteria, ants, etc don't act like us, and we don't act like them.
- jama211 - 19161 seconds ago
A fantastic read, even if it makes a lot of silly assumptions - this is OK because it's self-aware about it.
Who knows what the future will bring. If we can’t make the hardware we won’t make much progress, and who knows what’s going to happen to that market, just as an example.
Crazy times we live in.
- jcims - 13941 seconds ago
Is there a term for the tech spaghettification that happens when people closer to the origin of these advances (likely in terms of access/adoption) start to break away from the culture at large because they are living in a qualitatively different world than the unwashed masses? Where the little sparkles of insanity we can observe from a distance today are less induced psychosis and actually represent their lived reality?
- b00ty4breakfast - 11080 seconds ago
The Singularity as a cultural phenomenon (rather than some future event that may or may not happen or even be possible) is proof that Weber didn't know what he was talking about. Modern (and post-modern) society isn't disenchanted; the window dressing has just changed.
- ragchronos - 19571 seconds ago
This is a very interesting read, but I wonder if anyone actually has any ideas on how to stop this from going south? If the trends described continue, the world will become a much worse place in a few years' time.
- lencastre - 15723 seconds ago
I hope in the afternoon; the plumber is coming in the morning between 7 and 12, and it's really difficult to pin those guys to a date.
- sixtyj - 9101 seconds ago
The Roman Empire took 400 years to collapse, but in San Francisco they know the singularity will occur on (next) Tuesday.
The answer to the meaning of life is 42, by the way :)
- blurbleblurble - 1949 seconds ago
Today is Tuesday.
- dirkc - 18918 seconds ago
The thing that stands out on that animated graph is that the generated code far outpaces the other metrics. In the current agent-driven development hypepocalypse that seems about right - but I would expect it to lag rather than lead.
*edit* - seems in line with what the author is saying :)
> The data says: machines are improving at a constant rate. Humans are freaking out about it at an accelerating rate that accelerates its own acceleration.
- arscan - 19072 seconds ago
Don't worry about the future. Or worry, but know that worrying is as effective as trying to solve an algebra equation by chewing bubble gum. The real troubles in your life are apt to be things that never crossed your worried mind, the kind that blindsides you at 4 p.m. on some idle Tuesday.
- "Everybody's Free (To Wear Sunscreen)", Baz Luhrmann (or maybe Mary Schmich)
- jmugan - 21391 seconds ago
Love the title. Yeah, agents need to experiment in the real world to build knowledge beyond what humans have acquired. That will slow the bastards down.
- miguel_martin - 19116 seconds ago
"Everyone in San Francisco is talking about the singularity" - I'm in SF and not talking about it ;)
- athrowaway3z - 18746 seconds ago
> Tuesday, July 18, 2034
4 years early for the Y2K38 bug.
Is it coincidence or Roko's Basilisk who has intervened to start the curve early?
- jrmg - 20436 seconds ago
This is gold.
Meta-spoiler (you may not want to read this before the article): You really need to read beyond the first third or so to get what it’s really ‘about’. It’s not about an AI singularity, not really. And it’s both serious and satirical at the same time - like all the best satire is.
- svilen_dobrev - 16875 seconds ago
> already exerting gravitational force on everything it touches.
So, "Falling of the night" ?
- Bratmon - 12479 seconds ago
I've never been Poe's-lawed harder in my life.
- hinkley - 20697 seconds ago
Once MRR becomes a priority over investment rounds, that tokens/$ curve will notch down and flatten substantially.
- sempron64 - 18803 seconds ago
A hyperbolic curve doesn't have an underlying meaning modeling a process, beyond being a curve which goes vertical at a chosen point. It's a bad curve to fit to a process. Exponentials make sense to model a compounding or self-improving process.
- buildbot - 9021 seconds ago
What about the rate of articles about the singularity as a metric of the singularity?
- regnull - 17510 seconds ago
Guys, yesterday I spent some time convincing an LLM from a leading provider that 2 cards plus 2 cards is 4 cards, which is one short of a flush. I think we are not too close to a singularity, as it stands.
- wilg - 2097 seconds ago
> The labor market isn't adjusting. It's snapping. In 2025, 1.1 million layoffs were announced. Only the sixth time that threshold has been breached since 1993.
Bad analysis! Layoffs are flat as a board.
- kuahyeow - 11934 seconds ago
This is a delightful reverse turkey graph (each day before Thanksgiving, the turkey has increasing confidence).
- loumf - 7830 seconds ago
This is great. Now we won't have to fix Y2K38 bugs.
- skrebbel - 19132 seconds ago
Wait, is that photo of earth the legendary Globus Polski? (https://www.ceneo.pl/59475374)
- witnessme - 12379 seconds ago
That would be 8 years after math + humor peaked in an article about singularity.
- Johnny_Bonk - 4680 seconds ago
Wow, what a fun read.
- braden-lk - 19626 seconds ago
lols and unhinged predictions aside, why are there communities excited about a singularity? Doesn't it imply the extinction of humanity?
- moffkalast - 19301 seconds ago
> I am aware this is unhinged. We're doing it anyway.
If one is looking for a quote that describes today's tech industry perfectly, that would be it.
Also using the MMLU as a metric in 2026 is truly unhinged.
- bwhiting2356 - 11777 seconds ago
We need contingency plans. Most waves of automation have come in S-curves, where they eventually hit diminishing returns. This time might be different, and we should be prepared for it to happen. But we should also be prepared for it not to happen.
No one has figured out a way to run a society where able bodied adults don't have to work, whether capitalist, socialist, or any variation. I look around and there seems to still be plenty of work to do that we either cannot or should not automate, in education, healthcare, arts (should not) or trades, R&D for the remaining unsolved problems (cannot yet). Many people seem to want to live as though we already live in a post scarcity world when we don't yet.
- jonplackett - 18179 seconds ago
This assumes humanity can make it to 2034 without destroying itself some other way…
- banannaise - 19965 seconds ago
Yes, the mathematical assumptions are a bit suspect. Keep reading. It will make sense later.
- ddtaylor - 13909 seconds ago
Just in time for the Bitcoin halving to go below 1 BTC.
- cesarvarela - 18307 seconds ago
Thanks, added to calendar.
- dusted - 9924 seconds ago
Will... will it be televised?
- skulk - 20705 seconds ago
> Hyperbolic growth is what happens when the thing that's growing accelerates its own growth.
Eh? No, that's literally the definition of exponential growth. d/dx e^x = e^x
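A cleaner way to state the distinction the article seems to be reaching for: exponential growth has a constant doubling time, while hyperbolic growth's doubling time shrinks to zero, and it is the shrinking doubling time that produces a finite-time pole. A small sketch (the constant C is chosen arbitrarily):

```python
import math

# Exponential x(t) = e^t: the time to double is always ln 2, no matter
# how large x already is. Hyperbolic x(t) = 1/(C - t): doubling from
# x(t) to 2*x(t) takes (C - t)/2, which shrinks to zero as t -> C.
C = 10.0  # arbitrary pole location for the hyperbolic curve

def hyperbolic_doubling_time(t):
    # Solve 1/(C - t2) = 2/(C - t) for t2 - t.
    return (C - t) / 2

print(hyperbolic_doubling_time(0.0))  # 5.0
print(hyperbolic_doubling_time(9.0))  # 0.5
print(math.log(2))  # exponential doubling time, constant
```

Both curves have dx/dt increasing in x, so "growth accelerating its own growth" is true of each; only the hyperbolic one packs infinitely many doublings before a finite date.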
- wbshaw - 15025 seconds ago
I got a strong ChatGPT vibe from that article.
- markgall - 21347 seconds ago
> Polynomial growth (t^n) never reaches infinity at finite time. You could wait until heat death and t^47 would still be finite. Polynomials are for people who think AGI is "decades away."
> Exponential growth reaches infinity at t=∞. Technically a singularity, but an infinitely patient one. Moore's Law was exponential. We are no longer on Moore's Law.
Huh? I don't get it. e^t would also still be finite at heat death.
- aenis - 19337 seconds ago
Damn. I had plans.
- raphar - 8690 seconds ago
Why do the plutocrats believe that the entity emerging from the singularity will side with them? Really curious.
- qwertyuiop_ - 6777 seconds ago
Who will purchase the goods and services if most people lose their jobs? And who will pay the ad dollars that are supposed to sustain these AI business models if there are no human consumers?
- darepublic - 19236 seconds ago
> Real data. Real model. Real date!
Arrested Development?
- PantaloonFlames - 18964 sekunder sedanThis is what I come here for. Terrific.
- neilellis - 18874 sekunder sedanEnd of the World? Must be Tuesday.
- daveguy - 8075 sekunder sedanWhat I want to know is how bitcoin going full tulip and Open AI going bankrupt will affect the projection. Can they extrapolate that? Extrapolation of those two event dates would be sufficient, regardless of effect on a potential singularity.
- MarkusQ - 18328 sekunder sedanPrior work with the same vibe: https://xkcd.com/1007/
- singularfutur - 7999 sekunder sedanThe singularity is always scheduled for right after the current funding round closes but before the VCs need liquidity. Funny how that works.
- bradgessler - 9153 sekunder sedanWhat time?
- bpodgursky - 18378 sekunder sedan2034? That's the longest timeline prediction I've seen for a while. I guess I should file my taxes this year after all.
- nurettin - 9731 sekunder sedanWith this kind of scientific rigour, the author could also prove that his aunt is a green parakeet.
- ck2 - 9800 seconds ago
Does "tokens per dollar" have a Moore's-law-style doubling?
Because while machine learning is not actually "AI", an exponential increase in tokens per dollar would indeed change the world, much as smartphones once did.
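Whether tokens per dollar actually follows a Moore's-law-style doubling is an empirical question the thread doesn't settle, but the arithmetic of the claim is easy to sketch. The doubling period below is an assumption for illustration, not a measured figure:

```python
def tokens_per_dollar(initial, months, period_months=12):
    """Tokens purchasable per dollar after `months`, assuming the rate
    doubles every `period_months` (hypothetical doubling period)."""
    return initial * 2 ** (months / period_months)

# Starting at 1e6 tokens/$ with an assumed 12-month doubling:
print(tokens_per_dollar(1e6, 60))  # after 5 years: 32x, i.e. 32000000.0
```

Even a modest doubling period compounds quickly, which is why the question matters: five doublings is a 32x drop in cost per token.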
- OutOfHere - 19682 seconds ago
I am not convinced that memoryless large models are sufficient for AGI. I think some intrinsic neural memory allowing effective lifelong learning is required. This requires a lot more hardware and energy than throwaway predictions do.
- hipster_robot - 19430 seconds ago
why is everything broken?
> the top post on hn right now: The Singularity will occur on a Tuesday
oh
- vagrantstreet - 18681 seconds ago
I was expecting some mention of the Universal Approximation Theorem.
I really don't care much if this is semi-satire, as someone else pointed out; the idea that AI will ever get "sentient" or explode into a singularity has to die out, pretty please. Just make some nice Titanfall-style robots or something, a pure tool with one purpose. No more parasocial sycophantic nonsense, please.
- bitwize - 11866 seconds ago
Thus will speak our machine overlord: "For you, the day AI came alive was the most important day of your life... but for me, it was Tuesday."
- brador - 12754 seconds ago
100% an AI wrote this. Possibly specifically to get to the top spot on HN.
Those short sentences are the most obvious clue. It’s too well written to be human.
- Night_Thastus - 12246 seconds ago
This'll be a fun re-read in ~5 years, when most of this has ended up being a nothingburger. (Minus one or two OK use cases of LLMs.)
- CGMthrowaway - 6856 seconds ago
> 95% CI: Jan 2030–Jan 2041
- hhh - 3916 seconds ago
this just feels like ai psychosis slop man
- u8rghuxehui - 6155 seconds ago
hi
- boca_honey - 17917 seconds ago
Friendly reminder:
Scaling LLMs will not lead to AGI.
- cubefox - 18280 seconds ago
A similar idea occurred to the Austrian-American cyberneticist Heinz von Foerster in a 1960 paper, titled "Doomsday: Friday, 13 November, A.D. 2026".
There is an excellent blog post about it by Scott Alexander: "1960: The Year The Singularity Was Cancelled" https://slatestarcodex.com/2019/04/22/1960-the-year-the-sing...
- pickleRick243 - 9403 seconds ago
LLM slop article.
- api - 14741 seconds ago
This really looks like it's describing a bubble, a mania. The tech is improving linearly, and most of the time such things asymptote. It'll hit a point of diminishing returns eventually; we're just not sure when.
The accelerating mania is bubble behavior. It would be really interesting to have run this kind of model in, say, 1996, a few years before dot-com, and see whether it would have predicted the dot-com collapse.
What this is predicting is a huge wave of social change associated with AI, not just because of AI itself but perhaps more so as a result of anticipation of and fears about AI.
I find this scarier than unpredictable sentient machines, because we have data on what this will do. When humans are subjected to these kinds of pressures, they have a tendency to lose their shit and freak the fuck out: elect lunatics, commit mass murder, riot, commit genocides, create religious cults, etc. Give me Skynet over that crap.
- AndrewKemendo - 19220 seconds ago
Y'all are hilarious.
The singularity is not something that’s going to be disputable
it’s going to be like a meteor slamming into society and nobody’s gonna have any concept of what to do - even though we’ve had literal decades and centuries of possible preparation
I’ve completely abandoned the idea that there is a world where humans and ASI exist peacefully
Everybody needs to be preparing for the world where it's:
human plus machine
versus
human groups by themselves
across all possible categories of competition and collaboration
Nobody is going to do anything about it, and if you are one of the people complaining about vibecoding, you're already out of the race.
Oh, and by the way: it's not gonna be with LLMs. It's coming to you from RL + robotics.