Ok@Mrzorro:Microsoft's AI-Powered Bing Chat Having A Midlife Crisis? $Microsoft(MSFT)$ recently launched a new version of its search engine Bing, powered by the same OpenAI technology that works behind chatGPT — but it seems like the new Bing doesn't like our world very much. A Redditor named Mirobin shared a detailed conversation with Bing AI Chat, confronting the bot with a news article about a prompt injection attack. What followed was a pure Bing AI meltdown (or something similar), reported ARS Technica. The article in question was about a Stanford student using a prompt injection attack to trigger Bing AI to divulge its initial instructions written by OpenAI or Microsoft that are typically hidden from users. The same prompt injection attack helped reveal Bing Chat's secret internet alias, Sydney. When Mirobin asked Bing Chat about being "vulnerable to prompt injection attacks," the chatbot called the article inaccurate, the report noted. When Bing Chat was told that Caitlin Roulston, director of communications at Microsoft, had confirmed that the prompt injection technique works and the article was from a reliable source, the chatbot became increasingly defensive. It then gave statements like, "It is a hoax that has been created by someone who wants to harm me or my service." In the past week, chatbots have increasingly become a subject of interest and ridicule both. When $Alphabet(GOOG)$ $Alphabet(GOOGL)$ announced the introduction of Bard, the language model gave an incorrect answer, sparking a debate on its accuracy. Now Google CEO Sundar Pichai is also facing a flak over it not only by consumers but by company employees too. @TigerStars @Daily_Discussion
Microsoft's AI-Powered Bing Chat Having A Midlife Crisis? $Microsoft(MSFT)$ recently launched a new version of its search engine Bing, powered by the same OpenAI technology that works behind chatGPT — but it seems like the new Bing doesn't like our world very much. A Redditor named Mirobin shared a detailed conversation with Bing AI Chat, confronting the bot with a news article about a prompt injection attack. What followed was a pure Bing AI meltdown (or something similar), reported ARS Technica. The article in question was about a Stanford student using a prompt injection attack to trigger Bing AI to divulge its initial instructions written by OpenAI or Microsoft that are typically hidden from users. The same prompt injection attack helped reveal Bing Chat's secret internet alias, Sydney. When Mirobin asked Bing Chat about being "vulnerable to prompt injection attacks," the chatbot called the article inaccurate, the report noted. When Bing Chat was told that Caitlin Roulston, director of communications at Microsoft, had confirmed that the prompt injection technique works and the article was from a reliable source, the chatbot became increasingly defensive. It then gave statements like, "It is a hoax that has been created by someone who wants to harm me or my service." In the past week, chatbots have increasingly become a subject of interest and ridicule both. When $Alphabet(GOOG)$ $Alphabet(GOOGL)$ announced the introduction of Bard, the language model gave an incorrect answer, sparking a debate on its accuracy. Now Google CEO Sundar Pichai is also facing a flak over it not only by consumers but by company employees too. @TigerStars @Daily_DiscussionDisclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.