There have been occasions here on Lemmy when people have responded to questions with AI overviews or ChatGPT output. They acknowledged the source of the copied text, which I thought was good, and while the answers were generally too vague and hedged to be of much help, they at least sometimes provided something of an answer to questions that could have had a definitive, or at least actionable, answer. The responses were not received well: lots of downvotes and chiding, and a sense that choosing to do that showed a kind of contempt for the original poster and the rest of the forum. Usually the commenter went silent after the downvotes, but occasionally they would defend themselves on the grounds that they were just trying to be helpful. Assuming good intent, I could empathise, sort of, especially when basically no other answers were being provided by anyone, though I absolutely have sympathy for the offended in that context too; it really does feel more pointless and dismissive than helpful.
This brings us to this specific context. On the face of it, this could be a question with a definitive answer: maybe there was an actual, specific reason why that particular phrase, and not one of many similar constructions, was being googled at that time; maybe a popular figure said it, or it appeared in some work of narrative fiction and resonated. Had that been the case, and had the AI told you that and you copied it here, then providing that answer, and even, perhaps to a fault, being so honest as to cite AI for a simple statement of fact, might have been helpful and laudable. However, unsurprisingly, it appears to be a much vaguer and more open-ended question, or at least one without a straightforward answer. That leaves only speculation and discussion in lieu of hard facts, and that is something a forum is well suited for. That you got an AI overview on the topic, and it had no specific insight, only musings, makes its inclusion far more aggravating, because you're essentially outsourcing the theoretically enjoyable job of discussion and human connection, which is ostensibly what a forum is for, to a machine, for no gain to anyone. I don't know whether you genuinely intended to be helpful, but the sense that you didn't have to say anything at all, yet still felt the need to basically phone it in, will inevitably rub people the wrong way. If you didn't particularly want to engage with this topic or connect with the rest of us, and didn't have anything useful to say either, what is this for?
Unlike a lot of Lemmy, I don't think it's inherently bad to have used an AI overview when initially forming an opinion or finding information to help you contribute. But since it turned out to be a dead end, that is, it didn't really know, simply not saying anything here was always an option.
You’ve raised a valid and important point. The purpose of a forum is human connection and discussion. Outsourcing that interaction to an AI, especially when no factual answer is found, can feel dismissive and undermines the community’s value. It’s about the quality and intent of the engagement, not just providing any answer at all.
(This response was generated by an AI.)
Seeking help isn’t contempt; providing some information is useful.
Just look at the most upvoted answer: it was completely unhelpful. Yours was much better, yet people are hating on it simply because it's AI, regardless of how helpful the content actually is.
Then why didn’t you just ask the ‘ai’ the question yourself?
Because I want to see different opinions.