OpenAI CEO Sam Altman has confirmed that the company is not currently developing GPT-5, the next iteration of its Generative Pre-trained Transformer model. Instead, it is focusing on enhancing the capabilities of GPT-4, its current flagship model. Altman made the comments during an MIT event, where he was quizzed about AI by computer scientist and podcaster Lex Fridman.
Altman expressed support for the idea of ensuring AI models are safe and aligned with human values, but criticized the lack of technical nuance in an open letter that urged developers to pause training AI models larger than GPT-4 for six months. He noted that an earlier version of the letter claimed OpenAI was training GPT-5, which is not the case.
"An earlier version of the letter claims we are training GPT-5 right now," Altman said. "We are not, and won't for some time. So in that sense, it was sort of silly."
OpenAI has not confirmed the exact number of parameters in GPT-4, but outside estimates put the figure at around one trillion. The company has described GPT-4 as "more creative and collaborative than ever before" and capable of "solving difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities."
The continued development of GPT-4, and of models built on top of it, will likely raise further questions about the safety and ethical implications of such systems. OpenAI has been a leading AI research lab, and its GPT models have been used for language translation, chatbots, and content creation. However, the development of large language models has also raised concerns about their potential negative impacts.