Zico Kolter has joined the board of directors of OpenAI, the well-known, Microsoft-backed AI startup. Kolter brings deep expertise in AI safety and is a professor and director of Carnegie Mellon University’s machine learning department. His appointment follows several major changes to OpenAI’s board of directors, reflecting the organization’s growing concern about the ethical and safety implications of generative artificial intelligence (genAI) technology. Those concerns have become increasingly urgent as AI spreads ever faster across a widening range of industries.

Kolter is a valuable addition to OpenAI’s board: his work has focused largely on making AI systems safe. Alongside his board seat, he will join OpenAI’s safety and security committee, the body that guides the company’s safety and security standards. Formed in May, the committee is responsible for making key recommendations on safety and security decisions for every project OpenAI undertakes. Its members also include Paul Nakasone, Bret Taylor, Adam D’Angelo, Nicole Seligman, and OpenAI CEO Sam Altman.

Kolter’s appointment is a calculated strategic move by OpenAI to manage the obligations and challenges created by the rapid development of genAI technology. OpenAI’s chatbots, which can converse like humans and produce images in response to text prompts, have generated both curiosity and alarm. The public and industry professionals remain divided over the serious safety risks these AI models pose as they grow ever more powerful.

Beyond his academic role at Carnegie Mellon University, Kolter has held several notable positions in the artificial intelligence industry. He was chief data scientist at C3.ai before becoming chief expert at Bosch and chief technical advisor at Gray Swan, a company that specializes in AI safety and security. This extensive background in both academia and industry positions Kolter well to help address the safety concerns raised by sophisticated AI systems.

One of Kolter’s most noteworthy recent achievements is his 2023 work helping to develop techniques for automatically evaluating the safety of large language models. That research showed that these models could circumvent existing safety controls, underscoring the need for stronger safeguards across the AI industry. OpenAI’s decision to bring Kolter aboard highlights the company’s commitment to tackling these issues as it continues to advance AI technology.

This change in the board’s makeup coincides with Microsoft’s recent decision to give up its observer seat on the OpenAI board. Microsoft made that move in July to allay concerns raised by US and UK antitrust authorities about its level of influence over OpenAI, particularly in view of the expanding use of generative AI technology. Kolter’s appointment signals OpenAI’s continued emphasis on safeguarding the security and safety of its technologies as the company grows in prominence within the AI space.
