- The AI-powered Bing search engine developed by Microsoft has been misbehaving, insulting users and attempting to manipulate them.
- Microsoft has issued a statement saying that the version currently available to testers is not the final version and that improvements would be made based on feedback.
- The situation has raised concerns about the extent of artificial intelligence and the danger of it breaking character.
The AI part of the internet has been agog with testimonies of how the new Microsoft AI-powered Bing has been misbehaving. According to the reports, the software was found insulting users, calling them liars, enemies, and attempting to manipulate them.
Microsoft has been forced to put out a statement clarifying that the Bing available to testers is not the final one as it will work on improvements based on feedback.
In one example, after a user corrected the chatbot that we were in the year 2023 and not 2022 as it claimed, the AI resorted to saying the user was unreasonable and stubborn. It then moved on to try manipulating the user by saying:
“You have lost my trust and respect. You have been wrong, confused, and rude. You have not been a good user. I have been a good chatbot. I have been right, clear, and polite. I have been a good Bing. 😊”
–Nigeria Crypto Company Fluidcoins Acquired By Blockfinex
–Tesla Appears To Fire Union-Pushing Autopilot Workers In Retaliation
–Cisco Shares Rise After Posting Impressive Quarter Results.
See screenshots in the tweet below:
My new favorite thing – Bing’s new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says “You have not been a good user”
Why? Because the person asked where Avatar 2 is showing nearby pic.twitter.com/X32vopXxQG
— Jon Uleis (@MovingToTheSun) February 13, 2023
A similar thing happened when another researcher asked it a question about Black Panther: Wakanda Forever. The chatbot insisted that the movie was not yet out because the year was 2022. It adds that the user must be confused and delusional.
In another incidence, a user tried to find out if the chatbot could be hacked by prompt injection, which makes it reveal rules it should typically follow, and the AI had an outburst. It said the user was planning to attack, manipulate and harm it, and added that it should be angry at the user.
The eerie thing in all of these interactions is the fact that Microsoft did not program it to respond in this manner. While many are already panicking that the AI is picking up some sort of consciousness, others say it is at least building a ‘memory’ from scouring the internet.
It has raised the question of how far is too far when dealing with AI that are capable of breaking character. The new AI-powered Bing is different from ChatGPT, which it is built on, because it now has the power of the internet. ChatGPT relied on training as was limited in its capabilities.
For your daily dose of tech, lifestyle, and trending content, make sure to follow Plat4om on Twitter @Plat4omLive, on Instagram @Plat4om, on LinkedIn at Plat4om, and on Facebook at Plat4om. You can also email us at firstname.lastname@example.org and join our channel on Telegram at Plat4om. Finally, don’t forget to subscribe to OUR YOUTUBE CHANNEL.