Mind?
In the same sense I’d describe Othello-GPT’s internal world model of the board as ‘board’, yes.
Also, “top of mind” is a common idiom, and I guess I didn’t feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection on control vectors, coherence in self-modeling, etc.
Yes. The person(s) who set the LLM/AI up.
How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that: complaining. Do something about it. What do YOU think we should call the active context a model has access to, without personifying it or over-technicalizing the phrasing and rendering it useless to laymen, @[email protected]?
Well, since you asked, I’d basically do what you said. Something like “so ‘humans might hate hearing from me’ probably wasn’t part of the context it was using.”
Let’s be generous for a moment and assume good intent: how else would you describe the situation where the LLM doesn’t consider a negative response to its actions due to its training and context being limited?
Sure, it gives the LLM a more human-like persona, but so far I’ve yet to read a better way of describing its behaviour. It is designed to emulate human behaviour, so using human descriptors helps convey the intent.
I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy or reasoning, or be held accountable the same way a human could.
There’s value in brevity and clarity: I took two paragraphs, and the other was two words. I don’t like it either, but it does seem to be the way most people talk.
I assumed you would understand I meant the short part of your statement describing the LLM, not your slight dig at me, your setting up of the question, or your clarification of your perspective.
So, to be more clear, I meant “The LLM doesn’t consider a negative response to its actions due to its training and context being limited.”
In fact, what you said is not much different from the statement in question. And you could argue that, on top of being more brief, removing “top of mind” actually makes it clearer: it implies training and prompt context instead of the bot understanding and being mindful of the context it was operating in.
Assuming any sort of intent at all is the mistake.