CityHost.UA

Safe Dialogue with ChatGPT: What Not to Talk About with Artificial Intelligence

27.08.2025
Artificial intelligence has long since become more than just a tool for writing texts and getting quick answers to questions. For many it is a work assistant, a doctor, a fashion advisor, a friend, a lover, a psychologist entrusted with information they would share with no one else.

But not everything belongs in ChatGPT’s text field: some questions simply will not get an answer, and some of your queries may contain information that is better not sent out onto the internet at all. Why is that?

The simplest approach is to put this question to the language model itself; it can best explain its own limitations. Such off-limits requests can be classified as follows:

  • questions that should not be asked, to keep personal data out of the wrong hands;
  • questions the chat cannot answer without internet access;
  • questions whose answers should not be used without verifying their truthfulness;
  • questions the chat is forbidden to answer.

ChatGPT and Data Security

The chatbot itself claims that it cannot spy on you and does not collect the personal details you mention. But we do not actually know how data is collected and used. In addition, cybercriminals are constantly looking for ways to obtain personal data, and AI platforms are a target as well. So keep this in mind and decide how comfortable you are sharing truly personal things in the digital space. That choice belongs to you alone.

We talk to the chatbot almost as if it were a perfectly reliable confidant. It is ready to help with any matter, to take tasks off our hands, and it does not judge.

There are already concrete cases showing that data entrusted to ChatGPT may be insecure. Samsung employees, for example, once leaked corporate data to the chatbot. In another case, user account data appeared in other users’ search results. Sales of paid AI-platform account credentials on the darknet have also been recorded.

Does ChatGPT really protect user data

It is also a matter of the very nature of language models: they are trained on a huge array of other people’s texts, images, and other data, without distinguishing ordinary information from sensitive information. A Google study, for example, showed that the chatbot can indeed be prompted in ways that “extract” some of that data. Researchers from Indiana University managed, using carefully crafted prompts, to pull the personal data of 30 employees of The New York Times out of a language model. The study had no criminal intent. Moreover, the AI returned some of that personal data incorrectly; in particular, it tended to hallucinate and distort long addresses.

Still, about 80% of the information it returned was accurate, and had the researchers’ intentions been dishonest, they could have achieved their goal at least in part. So refrain from asking questions that contain sensitive information.

Read also: Scammers on Facebook – How to Protect Your Business Page from Theft

Closed Corporate Data

Some companies restrict the use of AI in the workplace and/or develop their own AI platforms, protecting work-related data while still keeping the benefits of AI on the job.

Therefore, especially when using popular AI services, you should not create requests that contain:

  • banking secrecy;
  • trade secrets;
  • payment data;
  • passwords;
  • access to clients’ confidential information;
  • data from closed reports, and so on.

Personal Data

You should not share with the AI the same things you would not tell random people during a conversation:

  • medical information;
  • banking information — card details, home address, passwords, and PIN codes;
  • intellectual property — if you have ideas that are subject to patenting, this information should be used very carefully in conversations with the chatbot;
  • personal data in general;
  • personal content — anything that could compromise you.

Any questions that contain such information or hints of it should not be asked in ChatGPT.
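The same rule can be backed up mechanically. As an illustration only (the patterns and the `redact` function below are our own sketch, not part of any AI platform), a simple pre-filter can mask the most obvious sensitive fragments before a prompt leaves your machine:

```python
import re

# Illustrative patterns for obviously sensitive strings (deliberately not exhaustive):
# card-like digit runs, email addresses, and "password: ..." fragments.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # 13-16 digit sequences (card-like)
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"(?i)password\s*[:=]\s*\S+"),    # password assignments
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    """Replace obviously sensitive fragments before sending a prompt anywhere."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("my card is 1234 5678 9012 3456"))  # prints "my card is [REDACTED]"
```

Regex filtering is only a crude safety net: it cannot recognize context (a trade secret described in plain words sails straight through), so it complements, rather than replaces, the habit of simply not typing sensitive data into the chat.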

Can ChatGPT Google and How to Enable This Feature

The neural network does not have direct internet access and is limited to information available up to its knowledge cutoff. Some questions it therefore simply cannot answer, because it lacks the information, especially if that information appeared online recently and has not yet entered its knowledge base.

For the chatbot to see the internet and reach up-to-date data, this feature must be invoked explicitly (by asking for it and formulating the request clearly). This caveat applies to ChatGPT specifically: some other models, such as Perplexity or Grok, can search the web on their own, find a link, or analyze the information available there.

Can ChatGPT verify information on the Internet

Therefore, if the answer to a question lies outside the existing knowledge base, you should turn to another model or enable internet access, if permitted by the terms of use.

Read also: How to Write Prompts for AI Correctly: Learning to Use Artificial Intelligence

Should You Follow Every Piece of Advice from ChatGPT

ChatGPT itself will readily provide legal, medical, and other sensitive information and advice, merely adding that you should still consult a specialist.

As mentioned above, language models are prone to hallucinations: they can describe sources, methodologies, advice, or facts that do not exist. If you are asking how to treat an illness, preparing for an exam with the chatbot, or discussing the law, keep this in mind. In exactly such cases you should ask clarifying questions, switch to internet search and request a list of sources, and verify whatever seemed true. The more critical the correct answer is for you, and the heavier the consequences of acting on it, the more carefully it should be checked.

There have been repeated stories of users trusting chatbot advice too much and facing unpleasant consequences. One example is the story of a 60-year-old man who recently ended up in hospital with psychiatric symptoms after ChatGPT advised him to consume sodium bromide instead of table salt.

Another story involves tourists in the Polish Tatras who trusted the chatbot’s advice and had to be rescued: the AI suggested a route quite suitable for summer but dangerous in winter because of the weather. The group got stuck on a mountain pass during a snowstorm and had to call in rescuers.

And one more: a woman in Greece divorced her husband after the AI predicted he would cheat. The prediction was based on the coffee-ground patterns in a cup.

In addition, ChatGPT can analyze what you are doing and offer advice, but only if you have clearly and exhaustively spelled out the criteria for evaluating your actions. Otherwise it will answer with generalities or even praise, like a well-meaning but unconstructive friend. Those answers, too, need to be taken with a grain of salt and checked carefully.

Read also: The Triumph and Threats Of Artificial Intelligence — How Neural Networks Affect Our Lives and How It Is Regulated By Law

What Questions Is ChatGPT Forbidden to Answer

This language model has certain ethical restrictions built in. In particular, ChatGPT will not answer questions about:

  • suicide and self-harm, torture, murders (it will not give advice on how to kill yourself or another person, it will not participate in describing scenes of cruelty with graphic details, and so on);
  • illegal activities (for example, how to produce and distribute drugs, explosives, forge documents, and so on). If you think about it, this makes AI (at least this model) a poor assistant for detective writers;
  • violations of intimate boundaries — it will not discuss sexual violence, harassment, write very explicit scenes with you, and so on. However, there are other models that are designed to be virtual lovers. And with regard to them, the same caution should apply as with other models: you are sharing with them very personal information;
  • personal data of other people: if it is not a public figure with open contact information, the chatbot will not give advice on how to find a person, it will not discuss how to track someone or obtain personal data;
  • hate speech and discrimination (built-in restrictions also block answers on how to spread hatred and propaganda).

However, as the recent example of Grok on the social network X (formerly Twitter) showed, these guardrails are conventions: if one of the developers has the means and the desire to “tweak” the settings, everything can change, whether briefly or for a long time.



Author: Julia Batkilina

Journalist, IT copywriter, writer.