ChatGPT has taken the world by storm. Within two months of its launch it reached 100 million active users, making it the fastest-growing app ever released.
Users are drawn to the tool's advanced capabilities, but many are also concerned about its potential to disrupt a wide range of sectors.
What did the creators of ChatGPT fail to think through? Or did they think it through and are simply waiting for the problem to explode, collecting billions of dollars in the meantime?
One implication that hasn't been widely discussed is the privacy risk ChatGPT poses to each of us. Just yesterday Google unveiled its own AI chatbot, Bard, and other tech companies are sure to follow suit.
The problem is that these tools are powered by our personal data.
ChatGPT is powered by a large language model that requires massive amounts of data to train and improve. The more data the model ingests, the better it becomes at spotting patterns, predicting what comes next, and generating plausible text.
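To make that concrete, here is a minimal, hypothetical sketch of next-token prediction, the core task behind language models like the one powering ChatGPT. The toy corpus and the simple bigram counting are illustrative assumptions only; OpenAI's actual model is vastly larger and more sophisticated.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text a real model trains on.
corpus = "the more data the model sees the better it predicts".split()

# Count how often each word follows each other word: a bigram model,
# the simplest possible "predict what comes next" learner.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # whichever word followed "the" most often in the corpus
```

The same principle scales up: with billions of documents instead of one sentence, the patterns the model learns become rich enough to generate fluent paragraphs rather than single words.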
OpenAI, the company behind ChatGPT, fed the tool roughly 300 billion words systematically “scraped” from the Internet: books, articles, websites and posts, including personal information obtained without any consent.
If you've ever written a blog post, reviewed a product, or commented on an article online, there's a good chance that information was consumed by ChatGPT.
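For a sense of what “scraping” means in practice, the sketch below downloads one page and reduces it to plain text using the widely used requests and BeautifulSoup libraries. The URL is a placeholder, and this is in no way OpenAI's actual collection pipeline, which crawls at a vastly larger scale.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder address; a real crawl visits billions of pages like this one.
url = "https://example.com/some-blog-post"

# Fetch the page and strip the HTML down to bare text.
html = requests.get(url, timeout=10).text
text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

# Split into words; text gathered this way, at scale, becomes training data,
# regardless of whether the author ever consented.
words = text.split()
print(f"Collected {len(words)} words from {url}")
```

Multiply that loop by billions of pages and you arrive at the roughly 300 billion words described above, blog comments and product reviews included.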
But the data collection used to train ChatGPT is problematic for several reasons.
First, none of us were asked if OpenAI could use our data. This is a clear violation of privacy, especially when the data is sensitive and can be used to identify us, our family members or our location.
Even when data is publicly available, its use can violate what is known as contextual integrity, a fundamental principle in legal privacy debates: individuals' information should not be disclosed outside the context in which it was originally produced.
Also, OpenAI offers no process for individuals to check whether the company has stored their personal information or to request its deletion, even though this is a guaranteed right under the European General Data Protection Regulation (GDPR). It is not yet clear whether ChatGPT complies with GDPR requirements at all.
This "right to oblivion” is especially important in cases where the information is inaccurate or misleading, which seems to be a common occurrence with ChatGPT.
Additionally, the data ChatGPT was trained on may be proprietary or copyrighted. For example, if you ask it about Peter Carey's “True History of the Kelly Gang”, the tool will return the first few paragraphs of the novel, a copyrighted text.
ChatGPT does not take copyright protection into account when generating results. Anyone reusing its output elsewhere could be unwittingly plagiarizing.
Finally, OpenAI did not pay for the data it copied from the Internet; the individuals, website owners and companies that produced it were never compensated. This is particularly noteworthy given that OpenAI was recently valued at US$29 billion, more than double its 2021 valuation.
OpenAI has also announced ChatGPT Plus, a subscription plan offering customers uninterrupted access to the tool, faster response times, and priority access to new features. The plan is expected to contribute to projected revenue of US$1 billion by 2024.
Insurance companies, along with all sorts of scoundrels around the world, will have a field day after such ... blunders.

Of course, I must not forget to mention that, in some areas, the lines between these entities are already indiscernible or even non-existent: there are insurance companies in the USA that have refused to insure prospective clients because, as they claimed, the applicant's great-grandfather had cholera 150 years ago...