pleroma.debian.social

A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.

“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.

Imagine if unions were arguing for the right of workers to use LLMs as labor-saving devices, instead of trying to protect members from their damage.

CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.

The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.

You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.

Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you.

@jzb Interesting thought experiment. Thanks for sharing that!

@jzb It's important to note though that you also paint a different possible reality: one in which we don't have to fear automation, but would get to welcome it.

Automation shouldn't be bad.

Wouldn't change the insane energy requirements as currently implemented. But it's not the tech that's necessarily evil: it is the people driving it.

@jzb I agree with you, but I also say, "Why not both?" While one is not good, the two can coexist.

What I think the AI Boosters and AI Detractors forget is that there is a group of folks quietly using this as a tool to retake their lives. Take a look at the various "overemployed" subreddits as an example.

This doesn't get rid of the valid criticisms of the AI companies and resource concerns.

This basically?

@matthewcroughan No. That's "wow, look at how you can overload your workers when they say 'I can't be in 3 places at once'."

They're not trying to sell Copilot to the person in the picture, they're trying to sell it to their bosses and trying to sell the productivity glory story to go along with it.

@bexelbie From my POV the answer to "why not both?" is that you can't really separate them right now.

Adoption of the commercial tools for whatever purpose does more to pave the way to the negative outcomes than any positive ones.

I think the "overemployed" thing is more of a statistical anomaly than a real thing.

Perhaps I'm just old and inflexible, though. Ideologically, I mean. I know I'm not very flexible physically these days...

@larsmb @jzb yes, if society were structured entirely differently, total automation of labor should be the goal. With survival largely tied to wages + access to capital, this LLM trend is an attempted death march for working people.

If it were inverted as @jzb mentioned, the previous trend of improving the performance of small models would get more attention. Small, edge/local, efficient models. It still could in the future.

I'm not saying anything new; it's been said a thousand times.

@jzb you had me at "Microsoft SlopGuard" :)

@jzb Your 4am sleep deprived state was bang on 💥💯

@larsmb @jzb
I keep having this conversation with my husband. He is of the opinion that a “useful tool” should be used within its capabilities, not pushed too far. For example, he is writing a thesis; he has all the information together, and now he wants to use AI to take his sources and find where in his document he derived the information from. He likes to use the example of “a personal librarian”.

@larsmb @jzb
I think if it’s a “closed system”, where you feed it information and tell it to only use the information it has and to say when it “comes up empty”, it should be okay. And to speed up the process of citations, that does seem useful. (He would also double-check its accuracy; say, if it says something is on page 34, it should be there, otherwise it’s not valid.)

@em_and_future_cats @jzb LLMs are notoriously bad at using only the information they have, at using all the information they have, and at telling whether they did.

So yes, can be useful, but every answer needs to be validated against facts.

@larsmb @em_and_future_cats Well, as designed, they are -- I'm not sure whether that's a built-in limitation of LLMs or not. To be fair, I am not an expert on the tech.

As something of an aside...

It would be really interesting if you could pair the natural language instruction input with predictable output.

That is, for example -- if I could query, say, all the data in Wikipedia but get only accurate output. Or if you had something like Ansible with natural-language playbook creation.

"Hey, Ansible -- I want a playbook that will install all of the packages I have currently installed and retain my dotfiles" (or something) and be guaranteed accurate output... that would be amazing.

Except that I also worry about losing the skills to do those things. I worry about the loss of incidental knowledge when researching: if a computer can return *only* what you ask for, you sacrifice accidental discovery.

(I also still think search engines were something of a mistake and miss Internet directories. Yeah, I'm fun at parties....)

@larsmb @jzb
I just worry that the younger generation will not understand how & when to use it & when to use their brains 🫩. In the science realm, AI can be useful for a lot of data-heavy and number-crunching stuff, but there must be a limit so that students can understand what & why these things happen. (And regulations so hallucinations are limited)
The humanities are a no-go zone imo; the use of a closed-system citation mechanism is probably the only exception to this.

@jzb that’s a whole lot of text to say the problem is capitalism

@jzb @larsmb
This too! Granted, if you’ve got to the PhD level through education before LLMs, you are probably okay with using it to “finish up”, but I really worry about younger generations (even myself) when it comes to all of this

@jzb It is an inherent limitation of how LLMs currently exist and are implemented.
They do strive to minimize it through scale, but it's also a reason why they do get "creative" in their answers.
Like with any stochastic algorithm, they perform best if you can (cheaply) validate the result. E.g., does the program still pass the tests?

This is much harder for complex questions about the real world.

@em_and_future_cats

@jzb
Yeah, no. It's the same theme Marx recognized some 150 years ago:

John Stuart Mill says in his “Principles of Political Economy”:
“It is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being.”
That is, however, by no means the aim of the capitalistic application of machinery. Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day, in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist. In short, it is a means for producing surplus-value.

[Capital, Vol. I, Part IV, Ch. 15]

@jzb On the plus side, Ansible (because it's so freaking widespread and well documented, and it is mostly fairly easy to tell if the answer would do the thing one asked for) is a fairly successful area to apply GenAI to.
Combine with ansible-lint, shellcheck etc. in the pre-commit hook, and the results are actually rather impressive.
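For the curious, a minimal sketch of that kind of setup (the rev pins below are placeholders; point them at whatever the projects currently release):

```yaml
# Sketch of a .pre-commit-config.yaml combining ansible-lint and
# shellcheck. The rev values below are placeholders -- pin them to
# the projects' current releases.
repos:
  - repo: https://github.com/ansible/ansible-lint
    rev: v25.1.1  # placeholder
    hooks:
      - id: ansible-lint
  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: v0.10.0.1  # placeholder
    hooks:
      - id: shellcheck
```

Then `pre-commit install` once per clone, and every commit gets linted before the LLM output lands.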

@jzb

“Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

This is already happening. So what’s the point? ;)

Back to business: You are right. Every person who is "just doing the job" is in danger of losing exactly this job, as AI will do it better and more efficiently. So the solution is to have a society of individuals who are smart enough to cope with it in an intelligent way. If not, the tech bros might win for a while, before it all collapses.

@jzb “Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers” — heh, most of the non-nerd people I know who work remotely are already using LLMs for this exact purpose, without any marketing

@jzb This would be happening if LLMs actually worked as advertised

@jzb that’s fair. I think it’s impossible for any tool to not have both a worker freedom use and a worker subjugation use. It often depends on who gets there first and is correlated with privilege.

@jzb Ayup. If all the “AI will replace humans” pushers were also coming up with plans for UBI at a decent level or some kind of post money Star Trek future…

Well, I’d still call them nuts because the tech is nowhere near good enough, but at least it would be a plan.

But you never hear anything about the human side of things and a few billion people are not just going away.

So far then, the idea seems to be the usual “fuck you, I’m OK” of the looting class.

@jzb i think the problem is more that workers have much greater work ethics than generally acknowledged, and if a tool allows them to work faster, they'll do more work, not reclaim more time.

but more than one explanation can be true at the same time.

The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

I feel like we actually did briefly see this early on when basically the first actual real-world use-case was students automating bullshit papers.

(Certainly, they reached for "plagiarism", the correct word, but leveled it at the students and not at all at the service provider.)

@jzb

@matt That's true, though I'm not sure I'd call using LLMs to do homework pro-worker, either. It's kind of a different tangent.

@jzb I've just asked ChatGPT to summarize your post.

It said: "If they were good for you, shareholders would hate them."

🙂

@MartinEscardo well played. I should’ve expected that, but in my defense… I was really tired. 😂

Because this is social media and not an in-person interaction, I should say that, of course, I didn't ask ChatGPT, just in case.

Of course @jzb already knew that.

@jzb You think what you say is not going to happen? Local LLMs will take over and what you envision will happen.

@jzb LOL at the idea that getting your work done means you can go home at noon or have a four-day weekend, rather than more work appearing on your desk.

@jzb What are you saying, that parts of the establishment defend other parts of the establishment? Yes, we know.

@jzb an interesting comparison is a 1970s show about the rise of the microprocessor (the 8080) that then had a discussion. The one person arguing it was good was the union rep, who correctly argued it would automate a load of tedious stuff and enable other work.
The difference this time is that generative AI doesn't do useful work. Neural nets do, and boring uses of the tech do, but not LLMs.

@etchedpixels @jzb I'm inclined to slightly disagree and think this is denialism (understandably, given how bad their ethics are; it'd be much easier if indeed they had no useful function).

The problem is that, despite all the scenarios where they're inappropriate and wrong, they do have useful functions.

And we're unwilling (as a society) to fully consider their risks and costs, because "there's no glory in prevention".

That's the challenge we need to overcome.

@larsmb @jzb Agreed. I used the word "generally" for a reason. There are plenty of cases where both are appropriate parts of treatment.

@larsmb @etchedpixels @jzb in a $work context I've found LLMs quite good at automating what would otherwise be "find the plausible Stack Overflow answer and copy-paste it, changing the names" or "write a shitload of boilerplate" or "explain the awful mess that this module is and work out what it was supposed to be for" or even "do a refactor in less time than it would take me to figure out the LSP support in this language and do it myself".

All things that should not be useful if we'd collectively made better choices, but given where we are now they have value in context

@dan @larsmb @jzb That's not really an LLM problem though - that's a very targeted problem being solved using an LLM as a large hammer, and a hammer that makes mistakes where formal methods and formal-method-derived tools in general do not.

As to "find the plausible Stack Overflow answer and copy-paste it, changing the names": part of my job at Intel was catching people doing this and dropping them in the shit. Automated versus wilful human copyright violation 8)

@tshirtman @jzb
it's also that if people who send work your way learn that you get it done quickly and reliably, they'll send work your way more often

@wolf480pl @jzb yeah, but they do mostly rely on workers' honesty to learn that.

And there is always more work to do, in my experience as a dev.

@tshirtman @jzb They rely on shit getting done to learn that.

If you finish a task early, that'll unblock your coworker who's been waiting for you to finish it, so they'll know you did it.

If you start a task late, sure, you spend less time doing things, but do you get to relax for the first half of the day, knowing that you have a backlog of things to do?

@jzb

🔥

@jzb You make an excellent point, and it also proves that many of these tools simply do not work.

As for my own profession, the idea of replacing software engineers with energy-hungry slop-code machines is simply a way to cut down on staff during hard times while making it look good to the stock market.

@larsmb

Side note: I'd call them anything but 'creative'.

If anything, the behavior is better described as 'evasive', since the model effectively keeps talking, without any substantial data backing up what's being conveyed.

Or, as Hicks, Humphries and Slater put it: They're bullshitting.

https://doi.org/10.1007/

@flberger @larsmb is that the correct DOI link?

@jzb
I'm going to leave an alternative idea. As a marketer, the value I see in AI is: "it tells people what to think". It's the ideal medium for propaganda. If you can control what ChatGPT replies (which I found out is very easy), you control their brains. Just teach them to use it for everything instead of thinking or searching the data. Imo, currently the models don't work so well for saving labour time, but they work well enough to answer short random questions, so people can use it as a search bar.

@jzb reminds me a lot of @pluralistic's post on Centaurs and Reverse-Centaurs in AI automation tasks.

https://pluralistic.net/2025/09/11/vulgar-thatcherism/

Of course, I think your analysis goes a step further, because it imagines how the reaction would be if things had started differently.

@riverpunk oooh. Apparently I'm a centaur. Cool. @pluralistic

@jzb

I've just shown my partner, who is a therapist, how to use an LLM to write some of the bullshit reports the insurance companies make her fill in.

@jzb

> It would be really interesting if you could pair the natural language instruction input with predictable output.

It exists, it's mostly accurate, and it can learn from its mistakes. It's a bit expensive though. It's called a human assistant.

@larsmb @em_and_future_cats

@jzb as @pluralistic often says, the important part of any technology isn't what it does, but who it does it *for*, and who it does it *to*…

@jzb Imagine if AI was used to eliminate top executives and techbroligarchs who are mostly nothing but rentier parasites.

@jzb Bus drivers--put this module in the OBD2 port, tell it "Drive route 42 for 4 hours" and spend all morning getting some peace in the library.

@RupertReynolds Likely result:

[Image: A bus that has crashed into a wall. Taken on January 31, 2026, in Brussels.]


@jzb Oof!