anyone else notice how easy it is to confuse chatgpt into incorrect answers?
it seems to contextualize answers based on the conversation, and it builds on each answer as a truth, eventually incorporating incorrect things based on 'temporary absolutes' it created earlier in the conversation thread.
but if you ask the same question out of context in a different conversation, it will give the correct answer.
it's interesting.