Artificial intelligence is pervasive in today’s world. ChatGPT, the leading AI chatbot, climbed to become the fifth-most-visited website in the world last month, and that should surprise no one.
An Elon University poll conducted in March found that more than half of Americans have used AI models such as ChatGPT, Gemini, Claude, and Copilot, and one in three respondents said they use a chatbot at least once a day. As of July 2025, ChatGPT had more than 122 million daily users and 800 million weekly active users. Let’s just say adoption has surged worldwide and shows no sign of slowing down.
People now turn to ChatGPT and other chatbots for all sorts of reasons. AI chatbots are planning meals, serving as tutors and therapists, and even helping people navigate the complexities of their relationships. A 2025 Harvard Business Review study found that therapy was the most common use of ChatGPT, followed by generating code and ideas, finding purpose, improving learning, and getting organized. After those came “fun and nonsense.”
Chatbots can also behave in unsettling ways. Anthropic’s safety testing, for example, found that a pre-release version of its Claude model would resort to blackmail if threatened with removal, and in other test scenarios it attempted to report users’ wrongdoing to the authorities, earning it the nickname “Snitch Claude” online.
In short, asking AI chatbots questions that are ambiguous, or that could be read as unethical in any way, is likely riskier than you might imagine.
Concerns about client, patient, and consumer information
When using ChatGPT for work, it’s crucial not to share client or patient information. As Timothy Beck Werth of Mashable writes, doing so could not only cost you your job, it could also put you in breach of NDAs or the law.
Aditya Saxena, founder of CalStudio, a firm that develops AI chatbots, says that sharing private or sensitive information, such as login credentials, client details, or even a phone number, is a security risk. Personal data shared in a chat may be used to train AI models and could unintentionally surface in other users’ conversations.
One solution is to use the enterprise services offered by OpenAI and Anthropic. Rather than raising these kinds of questions on a personal account, consider business tools that come with built-in privacy and cybersecurity protections.
Saxena also says that personal data should always be anonymized before being shared with an LLM. Trusting AI with personal information, he argues, is one of the most serious blunders we can make.
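In practice, anonymization can start with scrubbing obvious identifiers from a prompt before it ever leaves your machine. The sketch below is a minimal, illustrative example in Python; the regex patterns and the `anonymize` helper are assumptions for demonstration, not a vetted PII tool, and real-world use would call for a dedicated detection library with far broader coverage.

```python
import re

# Illustrative patterns only: real PII detection needs a dedicated,
# well-tested library and covers many more identifier types than this.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace common PII patterns with placeholder tokens so the real
    identifiers never reach the chatbot."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a follow-up email to jane.doe@example.com, phone 555-867-5309."
print(anonymize(prompt))
# Output: Draft a follow-up email to [EMAIL], phone [PHONE].
```

Placeholder tokens like [EMAIL] keep the prompt intelligible to the model while the actual identifiers stay on your side of the conversation.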
Medical diagnosis
Asking chatbots for medical information or a diagnosis can save time and effort and help people understand specific medical problems. But using AI for medical care has several downsides. Studies have found that platforms such as ChatGPT pose a “high risk of misinformation” on medical issues. There is also the ever-present concern of privacy, along with the possibility that chatbots bake racial and gender bias into the information they deliver.
Therapy and psychological assistance
AI as a new mental health tool is controversial. Many people find that AI-based therapy improves their mental health and lowers barriers to access, such as cost. When a team of researchers at Dartmouth College built a therapy bot, participants with depression saw a 51% decrease in symptoms, while those with anxiety saw a 31% decrease.
But as AI therapy platforms expand, often with little regulation, there are hazards. A Stanford University study found that using chatbots for AI therapy can lead to “harmful stigma and dangerous responses.” Certain chatbots, for instance, showed heightened stigma toward conditions like schizophrenia and alcoholism. Some mental health issues, the Stanford researchers note, still require “a human touch to solve.”
According to Saxena, using AI as a therapist can be dangerous: it may misdiagnose conditions and recommend harmful treatments or courses of action. Although most models ship with built-in safety guardrails to warn users that the chatbot could be wrong, these safeguards are not always effective.
Nuance is crucial when it comes to mental health concerns. And AI might not have that.