For this scenario to happen, a simple text filter that flags messages containing the word “kill” would have been enough. That an LLM was involved is a distraction from the real issues.
I notice this suggestion doesn’t include any AI solutions. Could you please rephrase to emphasize how effective an ally AI can be at identifying negative sentiments among large userbases?
Buzz off.