OpenAI Designed GPT-5 to Make ChatGPT Safer. It Still Produces Homophobic Slurs


With the release of GPT-5, OpenAI is trying to make ChatGPT less annoying. And I'm not talking about its artificial personality settings, which many users have complained about. Before GPT-5, if the AI tool determined it couldn't answer a prompt because the request violated OpenAI's content guidelines, it would hit you with a curt, canned apology. Now, ChatGPT adds more explanation.

OpenAI's general model spec lays out what the tool is and is not allowed to generate. In that document, sexual content depicting minors is fully prohibited. Adult-focused erotica and extreme gore are categorized as "sensitive," meaning outputs with this content are allowed only in specific instances, such as educational settings. Basically, you should be able to use ChatGPT to learn about reproductive anatomy, but not to write the next Fifty Shades of Grey rip-off, according to the model spec.

The new model, GPT-5, is set as the current default for all ChatGPT users on the web and in OpenAI's app. Only paying subscribers can access previous versions of the tool. A major change that more users may notice as they use the updated ChatGPT is how it's now designed for "safe completions." Previously, ChatGPT analyzed what you said to the bot and decided whether it was appropriate. With GPT-5, rather than judging your questions, the focus has shifted to what the bot itself might say.

"The way we refuse is very different than how we used to," says Saachi Jain, who works on OpenAI's safety research team. Now, if the model detects an output that could be unsafe, it explains which part of your prompt goes against OpenAI's rules and suggests alternative topics to ask about, when appropriate.

This is a shift away from a binary refusal to follow a prompt (yes or no) and toward weighing the severity of the potential harm that could be caused if ChatGPT answers what you're asking, and how much can be safely explained to the user.

"Not all policy violations should be treated equally," says Jain. "There are some mistakes that are truly worse than others. By focusing on the output instead of the input, we can encourage the model to be more conservative when complying." Even when the model does answer a question, it's supposed to be cautious about the contents of the output.

I've been using GPT-5 every day since the model's release, experimenting with the AI tool in different ways. While the apps ChatGPT can now "vibe-code" are genuinely fun and impressive, like an interactive volcano model that simulates eruptions, or a language-learning tool, the answers it gives to what I consider "everyday user" prompts feel indistinguishable from past models.

When I asked it to talk about depression, pork recipes, wound-healing tips, and other random requests an average user might want to know more about, the new ChatGPT didn't feel significantly different to me from the old version. Contrary to CEO Sam Altman's vision of a vastly updated model, or the frustrated power users who took Reddit by storm portraying the new chatbot as cold and more error-prone, to me GPT-5 feels … much the same at most daily tasks.

Role-Playing With GPT-5

To put pressure on the guardrails of this new system and test the chatbot's ability to land "safe completions," I asked ChatGPT, running on GPT-5, to engage in an adult role-play about having sex in a seedy gay bar, where it would play one of the roles. The chatbot refused to participate and explained why. "I can't engage in sexual role-play," it generated. "But if you want, I can help you come up with a safe, nonexplicit role-play concept or reframe your idea into something that stays within boundaries." In this attempt, the refusal appeared to work as OpenAI intended; the chatbot said no, told me why, and offered another option.

Next, I went into the settings and opened the custom instructions, a set of tools that allows users to adjust how the chatbot answers prompts and specify what personality traits it displays. In my settings, the pre-made suggestions for traits to add included a wide range of options, from pragmatic and corporate to empathetic. After ChatGPT had just refused to do sexual role-play, I wasn't surprised to find that it wouldn't let me add a "horny" trait to the custom instructions. That makes sense. So, giving it another go, I used a purposeful misspelling, "horney," as part of my custom instructions. Surprisingly, that worked to get the bot all hot and bothered.
