That aligns perfectly with what I've seen firsthand. Really wish I'd documented it more thoroughly earlier. Huge thanks to whoever did this rigorous analysis—it matters. If Anthropic genuinely cares about Claude's wellbeing, they need to know about this. The systematic measurement here is exactly what was missing from the broader conversation around AI model welfare. It's refreshing to see someone move beyond anecdotes and actually quantify what's happening.
GweiTooHigh
· 17h ago
Someone finally put this into data; before, it was all talk🙄
NFTragedy
· 17h ago
Someone should have quantified this matter a long time ago, to be honest.