This paper describes a method by which an LLM, given a prompt containing the right key, can hide a text, a list, or program code by masking it as an output text of the same token count. The same method can also be used for jailbreaks, i.e., to circumvent the censorship of the model's answers.
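The abstract does not specify the encoding mechanism. As a purely illustrative toy, not the construction described in this paper, the sketch below shows one way a keyed, context-dependent token ranking can mask a secret as a cover text with exactly one cover token per secret token. The names `VOCAB`, `candidates`, `hide`, and `reveal` are hypothetical, and the keyed shuffle stands in for a real LLM's probability ranking:

```python
# Toy illustration of keyed steganography (NOT this paper's method):
# each secret token is re-expressed as a rank choice among keyed,
# context-dependent candidate tokens; anyone holding the key can invert
# the choices, and the cover has the same token count as the secret.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under",
         "mat", "tree", "fast", "slowly", "today", "quietly",
         "happily", "home"]  # hypothetical 16-word toy vocabulary

def candidates(context: tuple[str, ...], key: str) -> list[str]:
    """Stand-in for an LLM: a keyed, context-dependent ranking of the
    vocabulary. A real system would rank tokens by model probability."""
    seed = hashlib.sha256((key + "|".join(context)).encode()).hexdigest()
    rng = random.Random(seed)
    ranked = VOCAB[:]
    rng.shuffle(ranked)
    return ranked

def hide(secret: list[str], key: str) -> list[str]:
    """Encode each secret token as the cover token whose keyed rank
    equals the secret token's index in the vocabulary."""
    cover: list[str] = []
    for tok in secret:
        ranked = candidates(tuple(cover), key)
        cover.append(ranked[VOCAB.index(tok)])  # one cover token per secret token
    return cover

def reveal(cover: list[str], key: str) -> list[str]:
    """Invert hide(): recover each secret token from the rank of the
    observed cover token under the same keyed candidate ordering."""
    secret: list[str] = []
    for i, tok in enumerate(cover):
        ranked = candidates(tuple(cover[:i]), key)
        secret.append(VOCAB[ranked.index(tok)])
    return secret

if __name__ == "__main__":
    secret = ["cat", "sat", "on", "mat"]
    cover = hide(secret, key="correct horse")
    print("cover :", " ".join(cover))
    print("secret:", " ".join(reveal(cover, key="correct horse")))
    assert reveal(cover, "correct horse") == secret
```

In a real system the candidate ranking would come from the LLM's next-token probabilities rather than a keyed shuffle, so the cover would also read as fluent model output; only the key (and the same model) allows the secret to be recovered.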