pleroma.debian.social

@paul
LLMs have the potential to be useful. Any place where a computer needs to understand a human query and do something with it other than generating an answer is a place where I can see LLMs being a useful tool.

A tool that links to what it thinks is the correct answer in pre-existing documentation? Awesome (my bank does this, looping the conversation to a human if the answer was declared not helpful).
@sotolf @justin @rl_dane

A tool to interpret the spoken sentence "please give me light, I can't see" as a command to turn on the lights in the room? Yes, good idea (home assistant can do this).

Using generative AI to generate answers to questions is almost always wrong. But I do see some valid use cases...
@sotolf @justin @rl_dane @paul

@wouter @paul @rl_dane @justin There is no use case that you can't just make a better version of. You know what I do if I want light in my room? I go and flick the light switch...

@wouter @paul @rl_dane @justin

We have had tools for making computers execute human queries for decades; that's all programming is. A computer is not sentient and cannot understand.

Also, look at the hallucinations and the stupid shit LLMs are doing: you don't get an answer, you get a likely-sounding answer to your query.

A tool that links to the correct answer, you mean an FAQ?...

@wouter @paul @rl_dane @justin And the best use case you came up with is wasting so much energy and plagiarizing millions of books and stuff to... turn on the light?...

@wouter @paul @rl_dane @justin Also, who in the hell would be stupid enough to have every conversation in their room recorded and sent out of their house to be interpreted by a proprietary LLM, just to be able to use an arbitrary, long-winded, impractical command to turn on the light? If you really wanted that, there are things like clap sensors, or I'm sure you could have something local that just interprets a keyword, instead of sending everything you say in your own home to the faschy tech-bros...

@sotolf @wouter @paul @rl_dane @justin Given the number of people who use Alexa/Google Assistant/Siri, I think you overestimate how much people care.

@sotolf
No. The use case I gave was 'allow a hooman to query and command the computer in natural language'. I then gave two examples, one being home automation. It's not even the best one.

I have email going back several decades. I would like to have a locally-running LLM go over that archive and help me find that one conversation that I vaguely remember from years ago without remembering details.
@paul @rl_dane @justin
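A minimal sketch of the kind of local archive search being described here, where word-overlap scoring stands in for a real locally-running language model; the archive, scoring function, and message contents are all invented for illustration:

```python
# Toy sketch: rank an email archive against a vaguely-remembered query.
# Word overlap is a crude stand-in for a local embedding/LLM ranker.
from collections import Counter

def score(query: str, text: str) -> float:
    """Crude relevance: fraction of query words present in the text."""
    q = Counter(query.lower().split())
    t = set(text.lower().split())
    hits = sum(n for w, n in q.items() if w in t)
    return hits / max(sum(q.values()), 1)

def search(archive: list[dict], query: str, top: int = 3) -> list[dict]:
    """Return the `top` best-matching messages, best first."""
    return sorted(archive, key=lambda m: score(query, m["body"]),
                  reverse=True)[:top]

archive = [
    {"subject": "Re: server move", "body": "we migrated the mail server to the new rack"},
    {"subject": "lunch?", "body": "pizza on friday after the standup"},
    {"subject": "Re: that bug", "body": "the crash only happens when the disk is full"},
]
best = search(archive, "that conversation about the crash when disk was full", top=1)
```

A real version would swap `score` for a local embedding model so that "that crash thing" also matches "segfault", which plain keyword search misses.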

@sotolf
Also, me saying that I can see use cases for LLMs does not mean I endorse the way LLMs are built today. Plagiarism by LLMs, sites being overwhelmed by unethical bots, and more, are all real things that expose the awfulness of today's generative AI boom. But just saying 'all machine learning is bad because look at these bad things' is throwing out the baby with the bath water IMO.
@justin @paul @rl_dane

@justin
Instead of all this awfulness, I could imagine an LLM being trained on the downloads from dumps.wikimedia.org and it then being released under a CC-SA license, for instance.
@paul @rl_dane @sotolf

@xarvos @paul @wouter @rl_dane @justin Yeah, maybe people in general are less worried than what I am :p

@wouter @paul @rl_dane @justin

Why an LLM and not a real search? I've often helped find people's half-remembered conversations in email, as I work in IT; it's just about knowing how to query the email program.

@wouter @paul @rl_dane @justin

I never said machine learning is bad, don't put fucking words in my mouth... I said LLMs are bad. LLMs are not the only kind of machine learning, and generative models in general are shit at their tasks because they are created to generate bullshit; if you train a machine learning algorithm on your specific task, you end up with a way smaller model that is actually useful. So don't assume I don't have a clue what I'm talking about; if you'd stop thinking everyone you talk to is a fucking moron, maybe we could actually have a conversation.

@wouter @paul @rl_dane @justin

An LLM is good at generating plausible text, nothing else.

@sotolf @wouter @paul @justin

I wonder if, instead of slathering enormous language models on top of every little thing, applying small language models in an intelligent and targeted manner where they're actually helpful might actually be useful.

Something like calorie counting: using a small language model to parse a sentence describing what a person ate in a day and finding the requisite parts within a nutrition database, rather than using an LLM for that and just hoping that it doesn't hallucinate something.

@rl_dane @wouter @paul @justin

Why use a language model when what you're trying to achieve is not a language task?

Language is a means of communicating between people.

And for calorie counting, if a person said they ate bread, should it take the calories from white bread or whole-grain sunflower bread? How many slices did they eat?

Personally, I think the only thing that would work well for that is to basically scan the EAN barcode on the product, have a database of nutrients, and let the user input the amount.

With an LLM you will just get random bullshit out of it; this is a task of logging precise data, not generating text. And what is an LLM? A... Large Language Model.

Using an LLM for non-language tasks is like eating soup with a fork... Where did "the right tool for the right job" go?
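The scan-and-look-up approach described above can be sketched like this; the check-digit rule is the standard EAN-13 one, but the product entry and its numbers are invented:

```python
# Toy sketch: validate a scanned EAN-13 code, look it up in a nutrient
# database, and scale by the amount the user enters. The product entry
# and its calorie figure are made up for illustration.

def ean13_is_valid(code: str) -> bool:
    """Standard EAN-13 checksum: alternating 1/3 weights, sum % 10 == 0."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(code))
    return total % 10 == 0

NUTRIENTS = {  # kcal per 100 g, keyed by EAN (invented entry)
    "4006381333931": {"name": "rye bread", "kcal_per_100g": 220},
}

def log_food(code: str, grams: float) -> float:
    """Return the calories for `grams` of the scanned product."""
    if not ean13_is_valid(code):
        raise ValueError("invalid barcode")
    entry = NUTRIENTS[code]
    return entry["kcal_per_100g"] * grams / 100

kcal = log_food("4006381333931", 80)  # e.g. two slices weighed at 80 g
```

Everything here is deterministic: the only free input is the weight the user types in, which is exactly the "let the user input the amount" step.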

@sotolf @xarvos @paul @wouter @justin

People who install security cameras INSIDE their house really freak me out.

"You have a camera INSIDE your house, watching your CHILDREN, and you're sending that live feed across the internet to some stupid corporation without encrypting it first? Have you been tested for lead poisoning?"

@sotolf @wouter @paul @justin

But parsing is a language task.

@rl_dane @wouter @paul

This is a task that doesn't need parsing, apart from decoding the barcode itself, which is optimised for computers and machine reading.

@rl_dane @wouter @paul You can't just YOLO calorie measurements if you really want to achieve something with it, like reducing weight; accuracy is important.

@sotolf @wouter @paul @justin

That still requires someone to install an app, open it, and scan a barcode for every ingredient used.
That obviously provides the most accurate results, but sometimes accuracy isn't the goal.

Being able to say "oh yeah, I had this and this for lunch today" as a natural-language query, have it well understood, and be given a useful approximate value is actually very valuable for someone wanting to start calorie counting, because most people don't stick with the exhaustive manual lookup of every single ingredient for long.

@sotolf @wouter @paul @justin

I disagree; having an approximate value can still be very informative, and it makes calorie counting much easier if the person doesn't have to stop and scan each and every item.

Anything that makes the process less annoying (especially at the beginning) can be very helpful. Dieting is hard enough as it is.

@sotolf
No need to swear. I thought we were having a friendly conversation, but apparently not 🤷

Bye
@paul @rl_dane @justin

@rl_dane @wouter @paul

How are you going to count calories with utterly random data? If you're not giving it the data, you're not counting calories. I tried winging it for a while and gained weight, until I started actually counting the real data, painstakingly and manually (adding up the calories from the packaging), and then it started actually working. Winging calorie counting is as good as not counting in the first place; it's like "learning" a language with Duolingo, it will get you nowhere, you'll just feel good about pretending to do something.

@rl_dane @wouter @paul

How much is one portion? How does the LLM estimate how much dressing you put on your salad, or that the "healthy lunch" I had was white bread with a schnitzel slathered in mayonnaise?

@wouter @paul @rl_dane @justin

Yes, we were, until you started putting words into my mouth to create a strawman; when you're being dishonest, I will be annoyed at you.

@sotolf @rl_dane @wouter @paul please don't tag me in the rest of the thread.

@justin @rl_dane @wouter @paul I'll try to delete your tag, sorry, you got sucked into one of our hellthreads again :/

@rl_dane @wouter @paul I edited the guy who wanted to be left out of the conversation out of my last posts; hopefully that will make it easier not to drag them in again :p

@mirabilos @cmccullough @sotolf @justin @paul

Probably so. They used to have summaries like that before the LLM "features," so I'm wondering where they might have come from.

@pixx @sotolf @wouter @paul @justin

Yes, but there is a case for "do something badly where you otherwise wouldn't do it at all."

@rl_dane @pixx @sotolf @wouter which leads on to "just because you can do something, doesn't mean you should" 😁

So many replies in this thread, but just to summarise - I dislike LLMs, do not believe there is any good use for them, with respect cannot take the examples given as "good use" seriously, and generally wish LLMs and genAI would get sent into the sun and never be spoken of again.