Dear everyone writing, "This is what's wrong with Google Search" articles:
Your articles literally read like, "This is what's wrong with repeatedly slamming my member in a car door."
Just stop, already. You do not have to use Google search. For anything. Ever.
It's been nerfed to the point that DDG is 99% as good, and you can even use DDG with JS turned off and customize it to do things like remove their AI search suggestions.
<voice actor="William Shatner" mode="maximum scenery chewing">
So... just... stop... using... Google search. It's... that easy. See?
</voice>
Thank you.
Edit: minor diction tweak
@rl_dane well said!
Also, a courtesy mention that DDG has https://noai.duckduckgo.com - no AI, no customisations required. Increase traffic to this URL and let them know we don't want AI!
@justin @paul @rl_dane Well, I would guess they will when the bubble finally pops; I just hope they go out of fashion. I have at least one podcast that I'm listening to where one of the hosts obviously uses ChatGPT to write their essays, and it's annoying since it's one I've listened to for over a decade. The lists of three things, the generalised boring language, the whole shebang...
@sotolf @justin @rl_dane
Agree.
Generative AI has the problem of not being reliable, and the way I see it, the only possible endpoint is the realization that the amount of time it takes to spec out and describe all possible edge cases to cater for specific needs, while avoiding hallucinations and factual errors, is much more wasteful in time and money than paying a human developer/researcher/author etc.
It needs to go away now.
@paul @justin @rl_dane It's a project stoked by management types, because they thought they could replace grunt work with yes-men. Well, there is a reason the "grunt workers" aren't yes-men: process knowledge and experience. Now they instead have a robot that bends over backwards to their very American view that "ideas are the real work". And we see where that ends: most of the places I've seen have had to get people back in the loop, or have stopped it completely, because process knowledge is important. Hopefully that won't set us back too much, but I'm afraid it will, as they might have to build it all back from scratch.
@sotolf @justin @paul @rl_dane I do like DDG, but dislike the AI inclusion. I've started using https://start.duckduckgo.com/lite.
@cmccullough @sotolf @justin @rl_dane yeah, lite is good. Though I'd like to think they may be using noai[dot] to analyse how many people go to the effort of avoiding AI; with lite it could be argued the numbers are skewed by low-powered devices.
Like many have said though, I wish it was the other way around, i.e. no AI by default and an ai[dot] for those that want it.
@paul @cmccullough @justin @rl_dane Just a bit sad that there isn't a short version of it. I'm so used to just typing ddg.gg; having to write the full noai.duckduckgo.com is so much longer, and so much more typing of uncomfortable-to-type words.
@paul @rl_dane I just did a #noai #DDG search for mksh
and checked one of the replies. Mostly for #lynx compatibility.
3. [11]mksh - MirBSD Korn shell - UEX
mksh is a command interpreter for interactive and shell script use, compatible with the original
Korn shell. It has various options, builtins, redirections, co-processes and conditional execution
features.
man.uex.se/1/mksh
That summary is pure AI slop, as the site reproduces just the manpage, faithfully enough, with some advertising atop. (Huh.)
It does not show a snippet or, well, anything. (Typical for LLMs…)
Shame on them for that. (They might not even be aware, as #DuckDuckGo is known to blindly ingest API results from Bing and possibly others; unsure whether their input even has this flagged (and they've just been too lazy to cut it out) or not…)
@sotolf @paul @cmccullough @justin @rl_dane oh c’mon
# DuckDuckGo search
ddg() {
    local _q _IFS _p=/ _a=
    _IFS=$IFS
    IFS=+
    _q="$*"
    IFS=$_IFS
    case /${BROWSER:-lynx} in
    (*/dillo*)
        # make result page and target links work
        _p=/lite/ _a='&kd=-1' ;;
    (*/links*|*/lynx*)
        # avoid automatic redirect
        _p=/lite/ ;;
    esac
    ${BROWSER:-lynx} "https://noai.duckduckgo.com$_p?kp=-1&kl=wt-wt&kb=t&kh=1&kj=g2&km=l&ka=monospace&ku=1&ko=s&k1=-1&kv=1&t=debian&q=$_q$_a"
}
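(A quick usage sketch of that function, for anyone following along. It assumes lynx or dillo is installed; the k* parameters are DuckDuckGo URL settings chosen by the author above and passed through unchanged.)
# search with the default browser (lynx)
ddg mksh korn shell

# or force dillo for this one query, which switches to the /lite/ page
# and appends &kd=-1 as in the case statement above
BROWSER=dillo ddg mksh korn shell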
@cmccullough @sotolf @justin @paul
I switched to #dillo as my default web browser on all but my work machine. XD
Ain't no AI features there!! ;)
@rl_dane @cmccullough @sotolf @justin @paul erm, still, those are done server-side
@cmccullough @sotolf @justin @paul @rl_dane
you can remove AI in the settings and save them to the cloud if you wish
@sirber83 @cmccullough @sotolf @paul @rl_dane they can also just remove the crap and everyone wins.
@mirabilos @cmccullough @sotolf @justin @paul
I know, but also, the version of the page that dillo pulls down doesn't have any AI features... and dillo doesn't have any JS to enable them, anyway. ;)
@rl_dane @cmccullough @sotolf @justin @paul search for mksh and see the summary for the linux.die.net/man/1/mksh
entry. If that’s not slop I’ll eat a broom.
LLMs have the potential to be useful. Any place where a computer needs to understand a human query and do something with that query other than generating an answer is a place where I can see LLMs as a useful tool.
A tool that links to what it thinks is the correct answer in pre-existing documentation? Awesome (my bank does this, looping the conversation to a human if the answer was declared not helpful).
@sotolf @justin @rl_dane
Using generative AI to generate answers to questions is almost always wrong. But I do see some valid use cases...
@sotolf @justin @rl_dane @paul
@wouter @paul @rl_dane @justin
We have had tools for making computers execute human queries for decades; that's all programming is. A computer is not sentient and cannot understand.
Also, look at the hallucinations and the stupid shit LLMs are doing: you don't get an answer, you get a likely-sounding answer to your query.
A tool that links to the correct answer, you mean an FAQ?...
@wouter @paul @rl_dane @justin Also, who in the hell would be stupid enough to have every conversation in their room recorded and sent out of their house to be interpreted by a proprietary LLM, just to be able to use an arbitrarily long-winded, impractical command to turn on the light? If you really wanted that, there are things like clap sensors, or I'm sure you could have something local that just interprets a single word for it, instead of sending everything you say in your own home to the faschy tech-bros...
No. The use case I gave was 'allow a hooman to query and command the computer in natural language'. I then gave 2 examples, one being home automation. It's not even the best one.
I have email going back several decades. I would like to have a locally-running LLM go over that archive and help me find that one conversation that I vaguely remember from years ago without remembering details.
@paul @rl_dane @justin
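(A rough sketch of that email-archive idea, in the same shell style as the ddg() function earlier. Everything here is hypothetical: "local-llm" and its "index" and "query" subcommands are a stand-in for whatever locally-running model tooling you would actually use, and ~/Mail/archive is an assumed mail location.)
# hypothetical local-LLM mail recall; "local-llm" is NOT a real tool,
# just a placeholder for a local embedding/LLM runner
mailfind() {
    local _db=~/.cache/mail-index
    # one-time (or periodic) indexing pass over the archive, all local
    [ -d "$_db" ] || local-llm index --input ~/Mail/archive --db "$_db"
    # vague natural-language recall, answered from the local index only
    local-llm query --db "$_db" "$*"
}
# e.g.: mailfind "that thread about the broken backup script, around 2009"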
Also, me saying that I can see use cases for LLMs does not mean I endorse the way LLMs are built today. Plagiarism by LLMs, sites being overwhelmed by unethical bots, and more, are all real things that expose the awfulness of today's generative AI boom. But just saying "all machine learning is bad because look at these bad things" is throwing out the baby with the bath water, IMO.
@justin @paul @rl_dane
Instead of all this awfulness, I could imagine an LLM being trained on the downloads from dumps.wikimedia.org and it then being released under a CC-SA license, for instance.
@paul @rl_dane @sotolf
@wouter @paul @rl_dane @justin
I never said machine learning is bad, don't put fucking words in my mouth... I said LLMs are bad. LLMs are not the only kind of machine learning; generative models in general are shit at their tasks because they are created to generate bullshit. If you train a machine learning algorithm on your specific task, you end up with a way smaller model that is actually useful. So don't assume I don't have a clue what I'm talking about; if you didn't treat everyone you talk to as a fucking moron, maybe we could actually have a conversation.
I wonder if, instead of slathering enormous language models on top of every little thing, applying small language models in an intelligent and targeted manner where they're actually helpful might be genuinely useful.
Something like calorie counting: using a small language model to parse a sentence describing what a person ate in a day and find the matching entries in a nutrition database, rather than using an LLM for that and just hoping it doesn't hallucinate something.
@rl_dane @wouter @paul @justin
Why use a language model when what you're trying to achieve is not a language task?
Language is a means of communicating between people.
And for calorie counting, if a person said they ate bread, should it take the calories from white bread or whole-grain sunflower bread? How many slices did they eat?
Personally, I think the only thing that would work well for that is basically to scan the EAN barcode on the product, have a database of nutrients, and let the user input the amount (a rough sketch of that follows below).
With an LLM you will just get random bullshit out of it; this is a task of logging precise data, not generating text, and what is an LLM? A... Large Language Model.
Using an LLM for non-language tasks is like eating soup with a fork... Where did "the right tool for the right job" go?
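(A minimal sketch of that barcode-plus-database approach, in the same shell style as the ddg() function earlier. It assumes the public Open Food Facts API and its energy-kcal_100g field, plus curl and jq; those are illustrative assumptions, not anything mentioned in the thread, and any local nutrition database would do just as well.)
# look up a scanned EAN barcode and log calories for a given weight in grams
# (Open Food Facts API and field names assumed; adjust for your own database)
kcal_for() {
    local _ean=$1 _grams=$2 _kcal100
    _kcal100=$(curl -s "https://world.openfoodfacts.org/api/v0/product/$_ean.json" |
        jq -r '.product.nutriments."energy-kcal_100g" // empty')
    if [ -z "$_kcal100" ]; then
        echo "no kcal data for EAN $_ean" >&2
        return 1
    fi
    # integer arithmetic on packaging-level data: kcal per 100 g times amount eaten
    echo "EAN $_ean: $(( _grams * ${_kcal100%%.*} / 100 )) kcal for ${_grams}g"
}
# e.g.: kcal_for <scanned-ean> 150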
@sotolf @xarvos @paul @wouter @justin
People who install security cameras INSIDE their house really freak me out.
"You have a camera INSIDE your house, watching your CHILDREN, and you're sending that live feed across the internet to some stupid corporation without encrypting it first? Have you been tested for lead poisoning?"
That still requires someone to install an app, open it, and scan a barcode for every ingredient used.
That obviously provides the most accurate results, but sometimes accuracy isn't the goal.
Being able to say, "Oh yeah, I had this and this for lunch today," write it down as a natural-language query, have it understood well, and get a useful approximate value back is actually very valuable for someone wanting to start calorie counting, because most people don't stick with exhaustively looking up every single ingredient for long.
I disagree: having an approximate value can still be very informative, and it makes calorie counting much easier if the person doesn't have to stop and scan each and every item.
Anything that makes the process less annoying (especially at the beginning) can be very helpful. Dieting is hard enough as it is.
How are you going to count calories with utterly random data? If you're not giving it the data, you're not counting calories. I tried winging it for a while and gained weight, until I started actually counting the real data, painstakingly and manually (adding up the calories from the packaging), and then it started actually working. Winging calorie counting is as good as just not counting in the first place; it's like "learning" a language with Duolingo: it will get you nowhere, you'll just feel good about pretending to do something.
@mirabilos @cmccullough @sotolf @justin @paul
Probably so. They used to have summaries like that before the LLM "features," so I'm wondering where they might have come from.
@rl_dane @pixx @sotolf @wouter which leads on to "just because you can do something, doesn't mean you should" 😁
So many replies in this thread, but just to summarise: I dislike LLMs, do not believe there is any good use for them, with respect cannot take the examples given as "good use" seriously, and generally wish LLMs and genAI would get sent into the sun and never be spoken of again.