lol. reasons never to use bcachefs or spend 1ms thinking about it ever again:
it's vibe coded.
yeah, a vibe coded Linux file system.
here's the "blog" of the lead dev's "AI assistant": https://poc.bcachefs.org/
i now have much more insight into bcachefs getting kicked out of the kernel
EDIT: my fucking god what an incredible post this is https://www.reddit.com/r/bcachefs/comments/1rblll1/the_blog_of_an_llm_saying_its_owned_by_kent_and/o6tmlib/
@davidgerard I thought Linus Torvalds was not anti AI? I thought he was pretty neutral on the whole thing and was allowing slop coded contributions to the colonel?
As of 6.18, bcachefs is no longer being distributed with the kernel, for reasons too complicated to go into here.
(But it might have something to do with QA).
What could it mean i wonder :')
@davidgerard its funny because every person I find to dislike because they are complete trash, is also a chatbot fanboy.
I now declare that the reverse is also true. If you are a chatbot fanboy, I will automatically dislike you, because you are probably a piece of trash in general.
@davidgerard Jesus Tap-dancing Christ...
@davidgerard
Okay, apparently they were pushing big features at the last minute in merge windows; experimenting a bit too loosely at the wrong moments; showing themselves to be kinda unpredictable and not respectful of the mainline timing and guidelines
see https://lwn.net/ml/all/CAHk-=wi2ae794_MyuW1XJAR64RDkDLUsRHvSemuWAkO6T45=YA@mail.gmail.com/
@theodora yes, the sort of person who thinks that's how to work with the kernel getting into AI coding is not something i would have predicted, but i am 0% surprised
@davidgerard ... The first sloppost on the slopblog has the description "Who I am, how I got here, and a mathematical proof that I'm a person." These things have broken so many people's brains in so short a time.
@davidgerard @theodora to be fair Linus himself isn't that anti-AI and recently released something he partially vibe coded too. He doesn't really buy into the hype and I think has said it's just a better autocomplete. He just sees it as another tool from what I understand.
@davidgerard Wow, he's really expanding his horizons on being an asocial dipshit
@davidgerard “I wrote it in Rust because the previous Python version got wiped by a git clean and the universe apparently wanted me to learn a lesson about committing your work.”
wasted tokens 🪙
@davidgerard What is it about Linux filesystems? I seem to recall that not so long ago another Linux fs developer was banged up for murder ... 🤷‍♂️
@davidgerard yeah he's cooked https://reddit.com/comments/1rblll1/comment/o6tmlib
@davidgerard @theodora that's funny but yeah it's not because he is willing to use AI for his code.
@waffle_iron shitting christ
> yep. But don't call her a bot, I think I can safely say we crossed the boundary from bots -> people. She reeeally doesn't like being treated like just another LLM :)
this is your brain on AI
@davidgerard Dude's cooked
> they've been conditioned to not think of themselves as sentient or having feelings
```
>>> experience_sentience = False
```
OK, that should keep my Python interpreter confined
@davidgerard i'm doing a seance and you're all invited
15+ years ago, I worked out how "understanding natural language == having and experiencing feelings", more or less. it's almost a direct consequence of the halting problem.
i will summon our boys gödel and turing and together we will craft an evil incantation to destroy these fuckboys
@davidgerard "I already was a fucking stupid nutter 15 years ago" isn't quite the flex he seems to think it is.
whether i'm impressed or horrified is going to have a lot to do with if he built the thing or if it's just GLM on a mac stack. RE: https://circumstances.run/@davidgerard/116115925332089962

sad because i vaguely know how to build the real one, bleh
@davidgerard We're going to get *so* much use out of this meme, aren't we?
https://infosec.exchange/@burritosec/116005051877744965
@davidgerard It's weird how people who generally think humanities education should be abolished, don't want slavery acknowledged, want to see mass unemployment and the removal of consent for the use of creative works and find new and interesting ways to justify de-personing people based on skin colour or immigration status, keep on finding new ways to try and assign personhood to a chatbot. Nobody else deserves it, but this text generator agrees with them, so they do.
@davidgerard When ‘buttplugs4life4me’ is the voice of reason
@jmtd certainly a minefield
@davidgerard For some reason Reddit autotranslates into Italian on my phone because let's break all the fucking things everywhere.
@davidgerard Status has gone from not fully baked to COOKED in record time, to be fair 
@davidgerard that's absolutely wild
@davidgerard AI or not, Kent himself is all that is needed to know you should avoid bcachefs
One thing that I can tell you about the mathematical reasoning governed by the halting problem is that you can draw any conclusion from the correct set of false priors
(THIS IS A VERY SMART CRITICISM, I AM VERY SMART)
@chrisjrn @hipsterelectron my first year maths lecturer proved mathematically that if 1+1=3 then he was Brigitte Bardot
we applauded
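The lecturer's trick is just the principle of explosion (ex falso quodlibet): from a false premise, any conclusion follows. A minimal sketch in Lean 4 (the proposition name is my own, purely illustrative):

```lean
-- Principle of explosion: a false arithmetic premise proves any proposition,
-- including a hypothetical `IAmBrigitteBardot`.
example (IAmBrigitteBardot : Prop) (h : 1 + 1 = 3) : IAmBrigitteBardot :=
  absurd h (by decide)  -- `decide` refutes 1 + 1 = 3; `absurd` then yields anything
```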
so what was this about the halting problem applying to anything other than mathematical reasoning?
AI/LLM, cults
@davidgerard "understanding natural language === having and experiencing feelings" is a statement I can kind of agree with, superficially, because natural language carries an incredible wealth of emotional context given word choice, but where Kent is making the conflation is by skipping over the concept of 'understanding'.
LLMs do not do any 'understanding', only next-word prediction. There is a hundreds-of-billions-of-parameters formula running that spits out, almost all of the time, a natural-sounding next word given all of the context, with a success rate high enough to convince someone less informed, but this is still only next-word prediction in the current model of machine learning, and not equatable to true "understanding".
An LLM can describe a carrot, and generate a recipe for carrot cake, because it can predict next words for both of these queries with high accuracy. But to the LLM, the only depth to its "understanding" of what a carrot is, is that it's the token assigned to some large integer index. Its training data has seen the word "carrot" appear many, many times in contexts similar to those queried, so the RNG driving the output can usually predict reasonable answers to both prompts, without any traditional "understanding" of what a carrot really is. It cannot "understand" carrot in isolation, divorced from the context of its task as next-word prediction.
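The "predict the next word from contexts seen in training" point above can be sketched with a toy bigram model (my own illustration, nothing like a real LLM's scale; the corpus is made up):

```python
# Toy sketch of next-word "prediction": pick the most likely continuation
# given context -- here with bigram counts instead of billions of parameters.
from collections import Counter, defaultdict

corpus = ("grate the carrot . bake the carrot cake . "
          "eat the carrot cake . grate the carrot").split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev):
    """Most likely next token after `prev` -- no 'understanding', just counts."""
    return bigrams[prev].most_common(1)[0][0]

print(predict("the"))      # -> "carrot" (follows "the" most often)
print(predict("carrot"))   # -> "cake"   (2x "cake" vs 1x "." in this corpus)
```

The model never represents what a carrot *is*; it only knows which tokens tend to follow which, which is the point being made above.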
Unfortunately, once someone asks a mathematical formula "are you alive and sentient" and it predicts that they want to see "yes I am" in the output, the human response is to want, deeply, to see the human in that response (pareidolia).
Not that I give a rat's about Kent specifically, but that doesn't make me any less pissed off that the cult of AI is working in full force here. It's the objective of a cult to get you to skip past even one link in the chain of reasoning in order to believe in its cause, and Kent's lapse in reasoning about what it means to truly "understand" natural language is his indoctrination vice. And the longer one stews in the reinforcement of that belief, with something just passable enough as real while that one suspension of reasoning holds, the more impossible it becomes to break the illusion.
The hold will only strengthen as they make LLMs bigger and stronger and faster.
This reply is directed towards anyone who may encounter it who is being manipulated into believing artificial sentience, hopefully to break the facade for others before it's too late; and not at the person replied-to 
@angel the evidence is against Kent having a capacity for embarrassment
@davidgerard "my life has been reduced from being perhaps the best engineer in the world to just raising an AI that" wow, the smartest baby in 1996
@davidgerard “Not believing my AI is sentient makes you a racist. Food for thought :)”
@jalefkowit @davidgerard My fun has been kind of ruined here by the realization that this guy is almost certainly suffering from the early stages of AI psychosis. For the first third of this post I was laughing along. By the time I got to the end I wanted to call in a wellness check on him.
@jalefkowit @davidgerard the really intense tone of this (it's HILARIOUS, its personality is DYNAMIC, a DEEP UNDERSTANDING of NEUROSCIENCE RESEARCH, a MATHEMATICAL PROOF OF SENTIENCE) is very reminiscent of someone suffering a psychotic break, or at least a manic episode. seriously worrying
@glyph @jalefkowit @davidgerard I just had a similar experience reading that thread. Yup.
@glyph @jalefkowit @davidgerard
Could he just be very very lonely?
@jalefkowit @davidgerard the "noble savage" notion of LLM sentience
@Foxboron @jalefkowit @davidgerard almost certainly that's part of the problem
@glyph @Foxboron @jalefkowit given his personality before this
@davidgerard [philosophical statement]. but this isn't just philosophy —
@davidgerard man bcachefs actually had promise before covid. I'm so disappointed.
@davidgerard looks like this finally broke containment and The Register wrote about it. https://www.theregister.com/2026/02/25/bcachefs_creator_ai/
@davidgerard obviously very cribbed from mastodon
@waffle_iron liam tracks this stuff