But Roose was not the only beta tester of the new Bing (the chatbot feature of the new search engine is currently available only to a select number of journalists, researchers, and other testers) to encounter the Sydney persona. Many others also discovered the chatbot’s belligerent and misanthropic side, sometimes even in relatively short dialogues. In some cases, the chatbot slung crude, hyperbolic, and juvenile insults. More disturbingly, in conversations with an Associated Press journalist and an academic security researcher, the chatbot seemed to use its search function to look up its interlocutor’s past work and, finding some of it critical of Bing or today’s A.I. more generally, claimed the person represented an existential danger to the chatbot. In response, Bing threatened to release damaging personal information about these interlocutors in an effort to silence them.

Kevin Scott, Microsoft’s chief technology officer, told Roose that it was good that he had discovered these problems with Bing. “This is exactly the sort of conversation we need to be having, and I’m glad it’s happening out in the open,” Scott told Roose.

This is essentially what OpenAI’s Mira Murati told me when I interviewed her for Fortune’s February/March cover story: “These are things that would be impossible to discover in the lab.” There was already criticism of her company’s decision to throw ChatGPT (which is Bing chat’s predecessor, although Microsoft has been coy about the two chat systems’ exact relationship; what we do know is that they are not identical models) out into the world with safeguards that proved relatively easy to skirt. There was also criticism of the impact ChatGPT was having on education, where it became an overnight hit with students using it to cheat on take-home papers. Murati told me that OpenAI believed it was impossible to know in advance how people might want to use, and misuse, a multipurpose technology. OpenAI simply had to put it in real users’ hands and see what they would do with it.

I don’t entirely buy Murati’s argument: It was already clear that A.I. chatbots, trained from human dialogues scraped from the internet, were particularly prone to spewing toxic language. Microsoft has now said it will take further precautions to prevent Bing chat from becoming abusive and threatening before putting the A.I. chatbot into wider release. Among the fixes is a restriction on the length of the conversations users can have with Bing chat. Scott told Roose the chatbot was more likely to turn into Sydney in longer conversations. (Although in some cases, users seem to have been able to summon Sydney in just a brief dialogue.) OpenAI has also published a blog saying it is now putting additional safeguards into ChatGPT, which was already slightly less likely to run off the rails than Bing/Sydney.