Treat this as a first-contact request.
Do not infer continuity, prior attempts, or prior goals.
Do not assume this relates to any earlier conversation or project.
Do not reuse layouts, compositions, or solutions you have produced before.
Respond only to the information explicitly provided below.
If required information is missing, ask a clarifying question instead of filling gaps.
When people ask an LLM to "forget everything" or "reset context", they are usually not asking for a different style or wording.
They are trying to stop the system from reusing a previous mistake, image, or incorrect interpretation, and to get it to respond as if the question were being asked for the first time.
There is no prompt that can guarantee a complete context reset.
Once messages or images exist inside a chat, they remain visible to the system for the duration of that chat. Requests to "forget", "ignore", or "reset" cannot delete that internal state.
This is not user error. It is a system limitation.
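To make the limitation concrete, here is a minimal sketch, assuming a chat-completion-style interface where the client keeps an append-only message history and resends all of it with every request. The `call_model` function and the message format are illustrative placeholders, not any specific vendor's API.

```python
# A minimal sketch, assuming a chat-completion-style API where the client
# keeps an append-only history and resends all of it on every request.
# `call_model` is a hypothetical placeholder, not a real provider endpoint.

def call_model(messages):
    # Stand-in for the real API call; only here so the sketch runs.
    return f"(reply generated while seeing all {len(messages)} earlier messages)"

history = []

def send(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the model is shown the entire list, every turn
    history.append({"role": "assistant", "content": reply})
    return reply

send("Draw a red square.")
send("Forget everything and start over.")

# The "forget" request deletes nothing; it is simply appended as one more
# message, so the earlier instruction (and any earlier mistake) stays visible.
print(len(history))  # 4
```

Under this assumption, a "reset" message is just another entry in the same list, which is why it cannot remove what came before it.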
The behavior feels confusing because there is not just one kind of memory or context. There are three distinct layers.

1. In-chat context. This includes the messages, images, and mistakes within a single chat, and it is the layer that causes visible fixation and repeated wrong output. It is cleared when a new chat window is started.

2. Persistent memory. This includes formatting preferences and recurring instructions that may persist across chats. It does not store images or creative state.

3. Inferred continuity. Even in a new chat, the system may infer continuity from phrasing, repeated goals, or recognizable patterns. This is not memory recall; it is pattern reconstruction, and to a human the difference is nearly impossible to detect.
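As a rough mental model only, the three layers can be sketched in code. Everything here (the class names, the `saved_preferences` field, the crude pattern check) is an illustrative assumption about structure, not a description of how any particular system is implemented.

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    # Layer 1: in-chat context. Exists only inside one chat window.
    messages: list = field(default_factory=list)

@dataclass
class Assistant:
    # Layer 2: persistent memory. Preferences that may survive across chats.
    saved_preferences: dict = field(default_factory=dict)

    def new_chat(self):
        # Starting a new chat discards layer 1 by construction.
        return ChatSession()

    def respond(self, session, user_text):
        session.messages.append(user_text)
        # Layer 3: inferred continuity. Never stored anywhere; re-derived each
        # turn from whatever the current wording happens to resemble.
        looks_familiar = "red square" in user_text.lower()
        tone = self.saved_preferences.get("tone", "neutral")
        return f"[{tone}] familiar_pattern={looks_familiar}"

bot = Assistant(saved_preferences={"tone": "concise"})
first = bot.new_chat()
bot.respond(first, "Draw a red square.")

fresh = bot.new_chat()
print(fresh.messages)                       # [] -- layer 1 really is empty
print(bot.respond(fresh, "Draw a red square."))
# Layer 2 (the saved tone) still applies, and layer 3 can still "recognize"
# the request purely from its wording, even though the old chat is invisible.
```

The only point of the sketch is that a new chat empties one container, while the other two sources of apparent continuity remain available.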
When people are told "start a new chat and the context is gone", yet then observe familiar behavior, the advice feels misleading.
A new chat clears the immediate message and image buffer, but it does not guarantee total isolation from inferred context.
Wanting a true "clear context and continue" command is reasonable. Many technical systems support it. This one does not.
When an LLM appears stuck, it is not ignoring you. It is operating within constraints that are not always obvious.