

I think people really missed my point, and thought I was somehow arguing in favour of poor working conditions.
My point was that the Lemmy response that “well why doesn’t the boss do this?” is not the right negotiation tactic.
The right negotiation tactic is to argue that improving working conditions benefits both the company and society. For example, by allowing remote working you encourage not only a happier and more productive environment, but you also widen access and are better able to recruit the top people.
There are lots of ways to argue for better conditions. The reaction of “well the boss doesn’t do it so I won’t either” is not a great tactic. If the boss does put in crazy hours, where does that leave your negotiation stance?
These things are interesting for two reasons (to me).
The first is that it seems utterly unsurprising that these inconsistencies exist. These are language models. People seem to fall easily into the trap of believing them to have any kind of “programming” or logic.
The second is just how unscientific NN/ML research is. This is why it’s hard to study ML as a science. The original paper referenced doesn’t really explain the issue or how to fix it, because there’s not much you can do to explain ML (see the second paragraph of their discussion). It’s not like the derivation of a formula, where you can point to one component and say “this is where it goes wrong”.