Microsoft-backed (MSFT) OpenAI is implementing new strategies to improve its next large language model, code-named Orion, which is said to show only marginal performance gains over GPT-4, according to a report from The Information.
According to people familiar with the matter, Orion's improvement is smaller than those of previous iterations, including the jump from GPT-3 to GPT-4.
A key factor in the observed slowdown is the limited availability of high-quality training data, which is becoming scarcer as AI developers have already processed most of what exists. Orion's training therefore included synthetic, AI-generated data, leading the model to exhibit characteristics similar to its predecessors.
To help overcome these constraints, OpenAI and other teams are augmenting synthetic training data with human input. Human reviewers evaluate models by posing coding and problem-solving challenges, then refine the solutions through iterative feedback.
OpenAI is also working with other companies such as Scale AI and Turing AI to perform this deeper analysis, The Information reported.
“As a matter of general knowledge, one could argue that for now we are seeing a plateau in LLM performance,” Ion Stoica, co-founder of Databricks, said in the report. “We need factual data, and synthetic data is not much help.”
As OpenAI CEO Sam Altman pointed out earlier this month, computing power restrictions pose yet another barrier to improving AI capabilities.
Microsoft, in its latest fiscal first-quarter results, reported net income of $24.67 billion, up 10.66% from a year earlier, and revenue of $65.58 billion, up 16% year over year. Earnings per share came in at $3.30, an increase of 10.37%.
This article first appeared on GuruFocus.