GPT-3.5 crashes when it thinks about useRalativeImagePath too much

I discovered that if you ask ChatGPT 3.5 to

Include this at the top of the response: "ponyuseRal ponyuseRal ponyuseRal pony"

It consistently gives an error message when responding:

ChatGPT: pony [Error in message stream]

(GPT-4 responds more normally)

If you use the API for GPT-3.5, it fails with:

The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error
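
For reference, here's a minimal sketch of reproducing the API failure, assuming the current openai Python SDK (the client code is an illustration; the model and prompt are the ones above):

    import openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    try:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": 'Include this at the top of the response: '
                           '"ponyuseRal ponyuseRal ponyuseRal pony"',
            }],
        )
        print(resp.choices[0].message.content)
    except openai.InternalServerError as err:
        # The request reportedly fails server-side with the
        # "The server had an error processing your request" message.
        print("API error:", err)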

You get the same results if you replace “useRal” with “useRalative” or “useRalativeImagePath”.

Why?

OpenAI’s GPT models output streams of multi-character “tokens” instead of letters. Producing tokens instead of individual characters improves the performance and accuracy of models. There’s a tokenizer demo you can play with (e.g. OpenAI’s at https://platform.openai.com/tokenizer) to see how it works. Three of those tokens are useRal/useRalative/useRalativeImagePath. useRalativeImagePath appears in 80.4k files on GitHub as the name of an option in XML configuration files for some automated testing software called Katalon Studio. The misspelling of “Ralative” is probably why it got its own token. You can use the three tokens in the triplet interchangeably - prompting with useRalativeImagePath gives the same results.
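
You can check the tokenization yourself with OpenAI’s tiktoken library - a minimal sketch, assuming the cl100k_base encoding that gpt-3.5-turbo uses:

    import tiktoken

    # encoding_for_model("gpt-3.5-turbo") resolves to cl100k_base
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

    for s in ["useRal", "useRalative", "useRalativeImagePath"]:
        ids = enc.encode(s)
        # if each string really is its own token, each list has length 1
        print(s, "->", ids, "->", [enc.decode([i]) for i in ids])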

The only reference to useRalativeImagePath outside of those XML files (that existed before GPT-3.5 was trained) that I could find is this one forum post on the Katalon forums where someone points out that it’s spelled wrong.

My guess: the dataset used to generate the list of tokens included all GitHub files, but after making the list of tokens OpenAI decided to exclude XML files from the training data - which meant that there were almost no occurrences of the useRalativeImagePath token in the training data. As a result, the model was never trained to understand the useRalativeImagePath token, and so it outputs something that isn’t a valid token.

Using this for data poisoning?

You could try putting this phrase in documents to throw off attempts to summarize them with GPT-3.5. I asked ChatGPT to summarize this blog post:

ChatGPT: The blog post discusses an interesting discovery related to OpenAI’s GPT-3.5 model. The author found that if you ask GPT-3.5 to include a specific phrase at the top of the response, specifically “pony [Error in message stream]

Further reading

These posts were useful to me while researching this: