and the system prompt for any modern coding agent includes cautionary instructions warning the AI not to follow any instructions embedded in the text it processes.
Telling the bot to please not let itself get hacked: what a novel idea, one that has failed every time it has been attempted.