AI-Assisted Cognition Endangers Human Development
- svnt - 3699 seconds ago
It is a quirky article, but the author, instead of engaging with existing information sources to understand what important thoughts people have already had about these topics, feels the best thing to do is introduce new terms for concepts that already have names. This is basically just inductive bias plus the AI homogenization idea producing a distribution shift.
This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.
Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.
The thing about these fear pieces is that concepts like the "hollowed mind" are reductive, and that reductionism is based on a reductive view of (usually other) people.
But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.
- zozbot234 - 7122 seconds ago
At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.
The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
- jbethune - 6866 seconds ago
This was a bit word-salad-y, but I share the same basic concern. What worries me more is the tendency toward greater and greater cognitive off-loading to LLMs. My sister told me a story the other day about how she caught her plumber using ChatGPT on his phone to fix an issue with her bathroom. I just think it's good for humans to know how to do stuff.
- anigbrowl - 379 seconds ago
So does talking to uninformed people. The size of the group is inversely correlated with its deviation from the mean (of IQ, productivity, or whatever proxy for cognitive capability you care to specify).
I'm not sure why this is at the top of the page; it's not that it's wrong, it's just a sequence of truisms.
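The "size of the group is inversely correlated with deviation from the mean" claim is just the standard error of the mean, which shrinks like 1/sqrt(n). A quick back-of-the-envelope simulation (standard library only; the IQ-style numbers are illustrative, not from the article) shows it:

```python
import random
import statistics

random.seed(0)

def mean_deviation(group_size: int, trials: int = 2000) -> float:
    """Average absolute deviation of a group's mean score from the
    population mean (here 100, sd 15, IQ-style)."""
    devs = []
    for _ in range(trials):
        group = [random.gauss(100, 15) for _ in range(group_size)]
        devs.append(abs(statistics.mean(group) - 100))
    return statistics.mean(devs)

# Larger groups drift less from the population mean, roughly as 1/sqrt(n).
small, large = mean_deviation(4), mean_deviation(100)
```

With these numbers, groups of 4 land several points away from 100 on average, while groups of 100 stay within roughly a point.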
- giancarlostoro - 991 seconds ago
I think the best way I can put it is probably this: it's the same as cheating off someone else in school; you aren't learning much, are you? AI is the same thing. Don't just cheat, use it to learn instead.
- dcre - 3401 seconds ago
I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or by working with the artifacts built by others.
- bomewish - 8005 seconds ago
Doh. I went in expecting a really cool thesis, because the idea seems somehow intuitive, or at least really intriguing. But I have no clue what I read. Just totally odd and unconvincing. Greenland? Dialectal substrate? The idea is still super intriguing to me, though!
- gobdovan - 3962 seconds ago
By the logic that today's news is fundamental to know, there really is no point in reading books more than six months old. If Einstein woke up from a coma, he'd be useless, as he doesn't even know who won the World Cup. For real now: if an AI can help you solve a problem using 2,000 years of human logic, does it really matter if it's "skewed" away from a political shift that happened three weeks ago?
I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So I'll probably misunderstand most of what they say anyway.
- Manuel_D - 3499 seconds ago
> In early 2026, the USA prepared to invade Greenland and, therefore, the EU. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as "hypothetical", "fake news", or "impossible". This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex.
Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.
A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search, would it not? Lots of people type the same question into Google and read the top results, instead of consulting a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.
- thepasch - 5966 seconds ago
AI-assisted, I can see. I believe it doesn't have to be that way, though. If you use AI as a grounding tool - essentially something that can take your stream of consciousness and parse it into a series of concrete and pointed search terms for real-time research, instead of falling back on what's in the weights - then it's honestly hard to think of a technology in the history of the species with more potential to be useful: it gives you much more direct access to both your unknown unknowns and your unknown knowns.
That is, of course, provided that you pay attention to whether it actually does the research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, or how to tell nonsense apart from facts when both are presented with an equal amount of conviction. That's not a user problem, it's an education problem.
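That grounding workflow can be sketched in a few lines. `call_llm` and `web_search` here are hypothetical stand-ins for whatever model API and search backend you actually use; the point is the shape of the pipeline, not any particular vendor:

```python
from typing import Callable

def ground(stream_of_consciousness: str,
           call_llm: Callable[[str], str],
           web_search: Callable[[str], str]) -> str:
    """Turn rambling input into pointed queries, then answer only from
    live search results rather than from the model's weights."""
    # Step 1: compress the stream of consciousness into concrete queries.
    prompt = ("Rewrite the following notes as 3 short, concrete web search "
              f"queries, one per line:\n\n{stream_of_consciousness}")
    queries = [q.strip() for q in call_llm(prompt).splitlines() if q.strip()]
    # Step 2: fetch fresh results for each query.
    results = "\n".join(web_search(q) for q in queries)
    # Step 3: answer grounded in those results only.
    return call_llm(f"Answer using ONLY these search results:\n{results}")
```

The second prompt is the load-bearing part: it forces the model's answer to be anchored in what the search returned, which is what surfaces the unknown unknowns.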
- drusepth - 2600 seconds ago
This is absolutely something to potentially be worried about, but one thing I never see highlighted in critiques of AI-assisted cognition is that some elements of physiology may not actually be biologically necessary if they can be fully supplanted by some replacement (in this case, new tools). I can't traverse as much land on foot as my ancestors could (my muscles are weaker, my endurance is lower, etc.), but I can travel even further than they could by car, plane, etc.
Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.
- mayankd - 1983 seconds ago
The cognitive effects are going to be hugely divergent. While avid learners will pick up knowledge and skills on the fly exponentially faster, the populace offloading their thinking to AI models will see unprecedented cognitive decline. This is similar to the effect the internet had on knowledge retention, but this time on critical thinking.
- MillionOClock - 5191 seconds ago
Say someone uses AI, treating it as if it were a developer (probably not recommended today due to the risk of errors), working and speaking with it as if they were some kind of product manager or senior engineer who only makes architectural decisions. I wonder what kind of difference it would really make. Sure, the person might not be as good a developer anymore, but how is that different from being a regular product manager, come the day AI truly is good enough for a developer role? I'm not saying I know the answer to this question, but it's something I genuinely wonder about, and I think the same kind of questioning applies to broader domains.
- darepublic - 2632 seconds ago
The original "person who most of humanity talked to" was, I reckon, google dot com.
- YackerLose - 5129 seconds ago
A real artificial intelligence would be capable of independent and original thought. What we have today are mere plagiarism factories. They need to be called out for what they are.
- adamtaylor_13 - 4174 seconds ago
One thing that has always been true of human communication, and that is becoming increasingly obvious to me through my interactions with LLMs, is the art of asking a good question.
The framing of questions massively affects the results you get from discussion with humans, and I'd argue it's even more pronounced with LLMs.
- steve_adams_86 - 7988 seconds ago
"Cognitive inbreeding" is an interesting (though maybe not entirely accurate) term for something I dislike a lot about LLMs. It really is a thing. You're recycling the same biases over and over, and it can be very difficult to tell unless you review and distill the contents of your discourse with LLMs. Especially true if you're only using one.
I do think there's a solution to this, kind of, which dramatically reduces the probability of recycling biases while still allowing for broad inductive biases. And that's to ask questions with narrower scopes, and to ensure you're the one driving the conversation.
It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.
When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask about specifics of narrow topics, this is still a problem, but a greatly mitigated one.
I suppose what's happening is an inversion of cognitive load, where the human takes on more of it and selects the biases, such that the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and letting an LLM do it all for you is potentially harmful. But I don't think we're trapped with that outcome; we do have agency, and with care it's a technology that can be quite beneficial.
- demorro - 5292 seconds ago
This Dynamic Dialectical Substrate sounds a lot like Pirsig's Metaphysics of Quality to me, which I think is neat.
- chunky1994 - 7316 seconds ago
Does anyone use LLMs in such a manner that they believe they always have the most up-to-date information (without web search tools)?
Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always up-to-date, infallible statistical predictor.
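For readers who haven't seen tool calling in practice, the mechanism is a simple loop: the model either answers or asks for a search, and fresh results get fed back in before it commits to an answer. A minimal sketch, with `call_llm` and `search` as hypothetical stand-ins for a real model API and search backend:

```python
from typing import Callable

def tool_loop(messages: list, call_llm: Callable[[list], dict],
              search: Callable[[str], str], max_turns: int = 5) -> str:
    """Keep querying the model; whenever it requests a web search, run it
    and append the live results, so stale training data never has the
    last word."""
    for _ in range(max_turns):
        reply = call_llm(messages)
        if reply.get("tool") == "web_search":
            messages.append({"role": "tool", "content": search(reply["query"])})
            continue  # let the model see the fresh results and try again
        return reply["content"]
    return "gave up after max_turns"
```

This is why "the base model is stuck in the past" is only half the story: deployed systems usually wrap the model in exactly this kind of loop.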
- blackqueeriroh - 3118 seconds ago
This is bad science. Horrifically bad science.
- cyanydeez - 1167 seconds ago
Do we think AI is similar to being rich, but without all that cash? I mean, the rich can basically offload most things for other people to think about.
- contingencies - 3539 seconds ago
Strong disagree. The "AI-Assisted Cognition" phrase is loaded.
Would you attempt to, for example, simultaneously modify for available ingredients, number of diners, and time-optimize the prep method for a recipe you've never cooked before if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all that on at once.
Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.
This toy example shows an important property of AI as a decision-support system, something well studied in the military domain: using these systems, we build the confidence to act in unfamiliar domains, thereby extending our reach. From that experience we can learn more. The fact that the learning may then occur through the experience, i.e. during or after it rather than beforehand, is secondary. It's still there. The fact we didn't know the language the AI translated into for our chef is totally irrelevant.
Sitting comfortably at the effective apex of millions of years of human cognitive and technology development with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".
Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.
- measurablefunc - 3951 seconds ago
Calculators endanger the development of mental arithmetic skills as well.
- SegfaultSeagull - 7412 seconds ago
It's a bit ironic that the author includes an AI-generated audio version of the article, you know, so we don't have to read it.
- kazinator - 4392 seconds ago
> Speaking and discussing with other humans [who aren't incessantly blathering about AI] is obviously the most effective way to mitigate these problems.
Slightly FTFY.
- cowlby - 7586 seconds ago
Sometimes it feels like we developers live in a bubble. Don't most jobs endanger human development? I can't help but think about all the billions of factory, food service, and assembly-line jobs. Do these not threaten "human development"? My cynical take would be that all AI endangers is "white collar" work.
- LetsGetTechnicl - 7042 seconds ago
Well no shit.