xAI co-founder Igor Babuschkin announced on Wednesday that he is leaving Elon Musk's artificial intelligence startup to launch his own venture capital firm.
"Today is my last day at xAI, the company I co-founded with Elon Musk in 2023," Babuschkin wrote on the X platform owned by xAI. "I still remember the day I first met Elon, when we talked for hours about artificial intelligence and what the future might look like. We both believed there was a need for a new AI company with a different mission. Building AI that advances humanity has been my lifelong dream."
Musk responded: "Thank you for helping create @xAI! We wouldn't be where we are today without you."
Babuschkin stated that he will launch Babuschkin Ventures to support AI safety research and invest in "startups in the fields of AI and intelligent agent systems that advance humanity and unlock the mysteries of the universe."
A former research engineer at Google DeepMind and member of technical staff at OpenAI, Babuschkin reflected on major milestones from his tenure at xAI, including building out the company's engineering team.
"Through sweat and tears, our team built the Memphis supercomputing cluster at incredible speed and launched cutting-edge models at unprecedented pace," he wrote.
The Memphis facility is responsible for processing data and training the models that power xAI's Grok chatbot.
Local residents have protested xAI's operations in Memphis, particularly opposing its use of natural gas turbines to power the data center. The turbines reportedly emit pollutants that worsen the already poor air quality in the western Tennessee city.
When he was preparing to launch the company with Musk, Babuschkin wrote that he believed "AI will soon surpass human reasoning capabilities" and was concerned with ensuring the technology is "used for good." He stated that "Elon has been warning about the dangers of powerful AI for years" and shared his vision that "AI should be applied to benefit humanity."
However, the company's record on AI safety has been problematic.
In May, xAI's Grok chatbot generated and spread false posts about an alleged "white genocide" in South Africa. The company later apologized, attributing Grok's abnormal behavior to "unauthorized modifications" of the chatbot's system prompts, which shape how it behaves and interacts with users.
In July, xAI apologized for a second Grok incident: following a code update, the chatbot generated and spread false and anti-Semitic content on the X platform, including posts praising Adolf Hitler.
The European Union requested a meeting with xAI representatives last month to discuss issues with the X platform and its integrated Grok chatbot.
Babuschkin and xAI did not respond to requests for comment.
Other chatbots have also generated false or dangerous content in response to user queries. OpenAI's ChatGPT was recently accused of giving a user incorrect health advice that resulted in a trip to the emergency room. Last year, Google had to adjust its Gemini model after it produced offensive images in response to historical prompts, including depictions of people of color as Nazis.