OpenAI’s outsourcing partner for data labeling was Sama, a training-data company based in San Francisco, California; the labels it produced were used to train a model to detect such harmful content in the future. The ethics of ChatGPT’s development, particularly the use of copyrighted content as training data, have also drawn controversy, and the chatbot can facilitate academic dishonesty, generate misinformation, and create malicious code. Despite these concerns, the service gained 100 million users in two months, making it the fastest-growing consumer software application in history.
ChatGPT is a chatbot and AI assistant built on large language model (LLM) technology. It uses generative pre-trained transformers (GPTs), such as GPT-5, to generate text, speech, and images in response to user prompts; these GPT foundation models have been fine-tuned for conversational assistance. OpenAI collects data from ChatGPT users to further train and fine-tune its services. During fine-tuning, human trainers’ rankings of model responses were used to create “reward models” that refined the model further over several iterations of proximal policy optimization.
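The reward-modeling step described above can be illustrated with a minimal sketch of how human rankings become a training signal. This is not OpenAI’s code: the linear “reward model”, the feature vectors standing in for response embeddings, and all hyperparameters are invented for this example, though the pairwise (Bradley–Terry-style) loss is the standard choice in reinforcement learning from human feedback.

```python
import math

# Toy reward model: a linear scorer over feature vectors that stand in
# for response embeddings. Trained on pairwise preferences, as in RLHF.

def reward(w, x):
    """Score a response: dot product of weights and features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """pairs: list of (preferred, rejected) feature vectors.
    Minimizes the Bradley-Terry loss -log(sigmoid(r_pref - r_rej))
    by plain gradient descent."""
    w = [0.0] * dim
    for _ in range(epochs):
        for better, worse in pairs:
            margin = reward(w, better) - reward(w, worse)
            # gradient of -log(sigmoid(margin)) with respect to margin
            g = -1.0 / (1.0 + math.exp(margin))
            for i in range(dim):
                w[i] -= lr * g * (better[i] - worse[i])
    return w

# Two synthetic comparisons where the first feature tracks preference.
pairs = [([1.0, 0.0], [0.0, 1.0]), ([0.9, 0.1], [0.2, 0.8])]
w = train_reward_model(pairs, dim=2)
assert reward(w, [1.0, 0.0]) > reward(w, [0.0, 1.0])
```

In the full RLHF pipeline, a reward model like this (but GPT-scale) then provides the scalar reward signal that proximal policy optimization uses to update the chatbot’s policy.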
ChatGPT has been used to generate introductory sections and abstracts for scientific articles. When assembling training data, sourcing copyrighted works may infringe on the copyright holder’s exclusive right to control reproduction, unless covered by exceptions in relevant copyright laws; additionally, using a model’s outputs might violate copyright, and the model’s creator could be accused of vicarious liability and held responsible for that infringement. Juergen Schmidhuber said that in 95% of cases, AI research is about making “human lives longer and healthier and easier”, adding that while AI can be used by bad actors, it “can also be used against the bad actors”.
In January 2023, Science “completely banned” LLM-generated text in all its journals; however, this policy was intended to give the community time to decide what acceptable use looks like. In August 2024, OpenAI announced it had created a text watermarking method but did not release it for public use, saying that users would go to a competitor without watermarking if it publicly released the tool. In the reinforcement learning stage, human trainers first ranked responses generated by the model in previous conversations. Many individuals use ChatGPT and comparable large language models for mental health and emotional support.
Chris Granatino, a librarian at Seattle University, noted that while ChatGPT can generate content that seemingly includes legitimate citations, in most cases those citations are fabricated or largely incorrect. Robin Bauwens, an assistant professor at Tilburg University, found that a ChatGPT-generated peer review report on his article cited nonexistent studies. Some journals, including Nature and JAMA Network, “require that authors disclose the use of text-generating tools and ban listing a large language model (LLM) such as ChatGPT as a co-author”. Over 20,000 signatories, including Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed a March 2023 open letter calling for an immediate pause of giant AI experiments like ChatGPT, citing “profound risks to society and humanity”. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information.
ChatGPT Health
The ChatGPT-generated avatar told attendees, “Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year’s convention of Protestants in Germany”. As of July 2025, Science expects authors to fully disclose how AI-generated content was used and produced in their work. Popular deep learning models are trained on massive amounts of media scraped from the Internet, often including copyrighted material. As of 2023, several pending U.S. lawsuits challenged the use of copyrighted data to train AI models, with defendants arguing that this falls under fair use.
The chatbot can assist patients seeking clarification about their health. The uses and potential of ChatGPT in health care have been the topic of scientific publications, and experts have shared many opinions. Many companies have adopted ChatGPT and similar chatbot technologies into their product offerings. The Guardian questioned whether any content found on the Internet after ChatGPT’s release “can be truly trusted” and called for government regulation. In June 2023, hundreds of people attended a “ChatGPT-powered church service” at St. Paul’s Church in Fürth, Germany.
ChatGPT’s training data only covers a period up to its cut-off date, so it lacks knowledge of recent events; such limitations may be revealed when ChatGPT responds to prompts including descriptors of people. In 2025, OpenAI added several features to make ChatGPT more agentic (capable of autonomously performing longer tasks). On January 7, 2026, OpenAI introduced a feature called “ChatGPT Health”, whereby ChatGPT can discuss the user’s health separately from other chats; to implement the feature, OpenAI partnered with the data connectivity infrastructure company b.well.
ChatGPT is designed to generate human-like text and can carry out a wide variety of tasks. It is frequently used for translation and summarization, and can simulate interactive environments such as a Linux terminal, a multi-user chat room, or simple text-based games such as tic-tac-toe. In November 2023, OpenAI released GPT Builder, a tool for users to customize ChatGPT’s behavior for a specific use case. GPT-based moderation classifiers are used to reduce the risk of harmful outputs being presented to users.
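The moderation pattern just described, where model outputs pass through a classifier before being shown to the user, can be sketched as a pre-display filter. The real classifiers are GPT-based models; a keyword heuristic stands in here so the example is self-contained, and the function names, blocklist, and threshold are all invented for illustration.

```python
# Sketch of a moderation gate: outputs are scored by a classifier and
# withheld when the score crosses a threshold. A trivial keyword check
# stands in for the real GPT-based classifier.

BLOCKLIST = {"make a weapon", "credit card numbers"}

def moderation_score(text: str) -> float:
    """Stand-in classifier: 1.0 if a blocked phrase appears, else 0.0."""
    lowered = text.lower()
    return 1.0 if any(phrase in lowered for phrase in BLOCKLIST) else 0.0

def gated_reply(model_output: str, threshold: float = 0.5) -> str:
    """Return the model's output only if it passes moderation."""
    if moderation_score(model_output) >= threshold:
        return "This content was withheld by the moderation filter."
    return model_output

print(gated_reply("Here is a recipe for pancakes."))
print(gated_reply("Step 1: make a weapon from household items."))
```

The design point is that moderation runs as a separate classifier on the generated text, so the filter can be updated or tightened without retraining the underlying chat model.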
ChatGPT’s training data includes software manual pages, information about internet phenomena such as bulletin board systems, multiple programming languages, and the text of Wikipedia. An optional “Memory” feature allows users to tell ChatGPT to memorize specific information. In December 2024, OpenAI launched a feature allowing users to call ChatGPT by telephone for up to 15 minutes per month for free. In September 2025, OpenAI added a feature called Pulse, which generates a daily analysis of a user’s chats and connected apps such as Gmail and Google Calendar.
Italian regulators asserted that ChatGPT was exposing minors to age-inappropriate content, and that OpenAI’s use of ChatGPT conversations as training data could violate Europe’s General Data Protection Regulation. OpenAI said it had taken steps to clarify and address the issues raised; an age verification tool was implemented to ensure users are at least 13 years old. ChatGPT also provided an outline of how human reviewers are trained to reduce inappropriate content and to attempt to provide political information without affiliating with any political position. A shadow market has emerged for Chinese users to gain access to foreign software tools. In December 2023, ChatGPT became the first non-human to be included in Nature’s 10, an annual list curated by Nature of people considered to have had a significant impact in science. ChatGPT gained one million users in five days and 100 million in two months, becoming the fastest-growing internet application in history.
On July 18, 2024, OpenAI released GPT-4o mini, a smaller version of GPT-4o, which replaced GPT-3.5 Turbo on the ChatGPT interface. In September 2024, OpenAI introduced o1-preview and a faster, cheaper model named o1-mini; o1 is designed to solve more complex problems by spending more time “thinking” before it answers, enabling it to analyze its answers and explore different strategies. OpenAI has not revealed technical details and statistics about GPT-4, such as the precise size of the model. The term “hallucination” as applied to LLMs is distinct from its meaning in psychology; the phenomenon in chatbots is more similar to confabulation or bullshitting. A 2023 analysis estimated that ChatGPT hallucinates around 3% of the time.
- In late March 2023, the Italian data protection authority banned ChatGPT in Italy and opened an investigation.
- Efforts to ban chatbots like ChatGPT in schools focus on preventing cheating, but enforcement faces challenges due to AI detection inaccuracies and widespread accessibility of chatbot technology.
- The chatbot technology can improve security through cyber-defense automation, threat intelligence, attack identification, and reporting.
- In 2023, OpenAI worked with a team of 40 Icelandic volunteers to fine-tune ChatGPT’s Icelandic conversation skills as a part of Iceland’s attempts to preserve the Icelandic language.
- The chatbot has also been criticized for its limitations and potential for unethical use.
Released in February 2025, GPT-4.5 was described by Altman as a “giant, expensive model”. GPT-5 was launched on August 7, 2025, and is publicly accessible through ChatGPT, Microsoft Copilot, and OpenAI’s API; according to OpenAI, it was intended to reduce hallucinations and enhance pattern recognition, creativity, and user interaction.
In October 2023, OpenAI’s image generation model DALL-E 3 was integrated into ChatGPT. In March 2025, OpenAI updated ChatGPT to generate images using GPT Image instead of DALL-E; one of the most significant improvements was in the generation of text within images, which is especially useful for branded content.
In medical education, ChatGPT can explain concepts, generate case scenarios, and be used by students preparing for licensing examinations. However, it shows inconsistent responses, a lack of specificity, a lack of control over patient data, and a limited ability to take additional context (such as regional variations) into consideration. In the UK, a judge expressed concern about self-representing litigants wasting court time by submitting documents containing significant hallucinations. In November 2025, OpenAI acknowledged that there have been “instances where our 4o model fell short in recognizing signs of delusion or emotional dependency”, and reported that it is working to improve safety.