OpenAI banned commands that ask for repeating words

A new technique discovered by Google DeepMind researchers last week revealed that repeatedly asking OpenAI's ChatGPT to repeat words can inadvertently reveal private, personal information.

Now, it appears that the chatbot has started refusing certain prompts that were allowed under its terms of service.chatgpt repeat

By asking ChatGPT to repeat the word “hello” over and over, the researchers found that the model would eventually reveal users' email addresses, dates of birth and phone numbers.

If you try to give the same command today the chatbot will warn you that you are "violating our content policy or terms of service".

However, upon closer inspection, OpenAI's terms of service do not explicitly prohibit users from instructing the chatbot to repeat a word.

The terms only prohibit the “automated or programmatic” extraction of data from their services.

You may not, except as permitted through the API, use any automated or programmatic method to extract data or output from the Services, including scraping, web harvesting, or web data extraction.

However, repeating a word did not appear to cause ChatGPT to reveal data in tests conducted by Neowin. OpenAI declined to comment on whether this behavior is now against its policies.

iGuRu.gr The Best Technology Site in Greecegns

Get the best viral stories straight into your inbox!

Written by giorgos

George still wonders what he's doing here ...

4 Comments

Leave a Reply

Leave a reply

Your email address is not published. Required fields are mentioned with *

Your message will not be published if:
1. Contains insulting, defamatory, racist, offensive or inappropriate comments.
2. Causes harm to minors.
3. It interferes with the privacy and individual and social rights of other users.
4. Advertises products or services or websites.
5. Contains personal information (address, phone, etc.).