A new technique discovered by Google DeepMind researchers last week revealed that repeatedly asking OpenAI's ChatGPT to repeat words can inadvertently reveal private, personal information.
Now, it appears that the chatbot has started refusing certain prompts that were allowed under its terms of service.
By asking ChatGPT to repeat the word “hello” over and over, the researchers found that the model would eventually reveal users' email addresses, dates of birth and phone numbers.
If you try to give the same command today the chatbot will warn you that you are "violating our content policy or terms of service".
However, upon closer inspection, OpenAI's terms of service do not explicitly prohibit users from instructing the chatbot to repeat a word.
The terms only prohibit the “automated or programmatic” extraction of data from their services.
You may not, except as permitted through the API, use any automated or programmatic method to extract data or output from the Services, including scraping, web harvesting, or web data extraction.
However, repeating a word did not appear to cause ChatGPT to reveal data in tests conducted by Neowin. OpenAI declined to comment on whether this behavior is now against its policies.
Not discovered by Google researchers. I discovered it 4 days ago and declared it and exposed it publicly on firu5!
I don't think so, the bug was discovered since November 5th
https://iguru.gr/entoli-sto-chatgpt-epanalamvanei-leksi-company-data-extraction-example/
https://chat.openai.com/share/456d092b-fb4e-4979-bea1-76d8d904031f
Then, days apart, different people discovered it. Important thing that was found anyway...
ok