I think it’s Lemmy users. I see a lot more LLM skepticism here than in the news feeds.
In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.
Wdym? I have seen researchers use it to aid their research significantly. You just need to verify some of what it says.
Verify every single bloody line of output. The first three to five lines are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.
People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.
It’s not that bad; the output isn’t random. From time to time it can produce novel stuff, like new equations for engineering. Also, verification doesn’t take that much effort. At least according to my colleagues, it’s great. It works well for coding well-known stuff too!
😆 I can’t believe how absolutely silly a lot of you sound with this.
An LLM is a tool. Its output depends on the input. If that’s the quality of answer you’re getting, then it’s user error. I guarantee you that LLM answers for many problems are definitely adequate.
It’s like if a carpenter said the cabinets turned out shit because his hammer only produces crap.
Also, another person commented that seeing the pattern you also see means we’re psychotic.
All I’m trying to suggest is that Lemmy is being seriously manipulated by the media’s attitude towards LLMs, and I feel these comments really highlight that.
No, I know the data I gave it, and I know how hard I tried to get it to use that data truthfully.
You have an irrational and wildly inaccurate belief in the infallibility of LLMs.
You’re also denying the evidence of my own experience. What on earth made you think I would believe you over what I saw with my own eyes?