And what if some AI chose, for any reason, to randomly limit its cooperation? Like when it delivers very incomplete results, summarized answers, etc. E.g., to save resources, and/or to follow a system prompt (or for any other understandable reason)?
LLMs can be made deterministic, though; I wonder if that will become more popular in the future.
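To illustrate the point about determinism: the sampling step is where the randomness lives, so replacing it with greedy (argmax) decoding, or fixing the random seed, makes the output reproducible. This is just a toy sketch over raw logits, not any particular API:

```python
import math
import random

def next_token_greedy(logits):
    # Greedy decoding: always pick the highest-scoring token,
    # so identical logits always yield the identical choice.
    return max(range(len(logits)), key=lambda i: logits[i])

def next_token_sampled(logits, temperature=1.0, seed=None):
    # Temperature sampling: stochastic in general, but
    # reproducible if the seed is fixed.
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [0.1, 2.5, 0.3, 1.7]
# Greedy decoding is deterministic across calls:
assert next_token_greedy(logits) == next_token_greedy(logits)
# Seeded sampling is reproducible too:
assert next_token_sampled(logits, seed=42) == next_token_sampled(logits, seed=42)
```

In practice, hardware-level floating-point nondeterminism can still make "temperature 0" runs diverge on real serving stacks, which is part of why fully deterministic inference is less common than this sketch suggests.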
As mechanistic interpretability gets better, I would hope we’re able to get some more clarity here
However, in most cases, my AIs collaborate effectively (ChatGPT, LeChat…)
And what if some AI chose, for any reason, to randomly limit its cooperation? Like when it delivers very incomplete results, summarized answers, etc. E.g., to save resources, and/or to follow a system prompt (or for any other understandable reason)?
Not sure what you mean