Private Company Defaults Prompt Caution in AI Infrastructure
With private company defaults soaring to over 9.2%—the highest rate seen in recent years—venture capital firm Lux Capital has advised artificial intelligence companies to secure their compute capacity commitments in writing. As financial uncertainty permeates the AI supply chain, a mere handshake agreement has become inadequate.
Multiverse Computing Offers Alternatives to Traditional Infrastructure
Another viable option is to forgo reliance on external compute infrastructure altogether. Smaller AI models designed to operate directly on a user’s personal device—eliminating the need for a data center, cloud provider, or counterparty risk—are emerging as compelling alternatives. Spanish startup Multiverse Computing is vying for attention within this niche.
Rising Demand for Efficient AI Solutions
Though previously less prominent than its competitors, Multiverse Computing’s profile is growing as the demand for efficient AI solutions intensifies. The startup has successfully compressed models from leading AI organizations, including OpenAI, Meta, DeepSeek, and Mistral AI. It recently introduced an app to showcase these capabilities and an API portal that provides developers with direct access to its models.
CompactifAI App Enhances Local AI Capabilities
The CompactifAI app, named after the company’s quantum-inspired compression technology, is an AI chat tool akin to ChatGPT or Mistral’s Le Chat: users pose questions and receive answers. Responses come from Gilda, a lightweight Multiverse model designed to run locally and offline.
Challenges and Limitations of Mobile Deployment
The app offers a glimpse into edge computing, keeping user data on the device and working without an internet connection. However, the device must have sufficient RAM and storage; otherwise the app falls back to cloud processing via an API, negating its primary privacy advantage. Multiverse’s Ash Nazg system manages this handoff between local and cloud resources.
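The local-versus-cloud handoff attributed to Ash Nazg can be illustrated as a simple capability check. The thresholds and function below are illustrative assumptions, not Multiverse’s actual logic or Gilda’s published hardware requirements:

```python
# Illustrative sketch of local-vs-cloud routing; the real Ash Nazg
# behavior and Gilda's hardware requirements are not public.
MIN_RAM_BYTES = 6 * 1024**3    # assumed RAM needed for on-device inference
MIN_DISK_BYTES = 4 * 1024**3   # assumed free storage for model weights

def choose_backend(available_ram_bytes: int, free_disk_bytes: int) -> str:
    """Return "local" when the device can run the model itself,
    otherwise "cloud" (the fallback described in the article)."""
    if available_ram_bytes >= MIN_RAM_BYTES and free_disk_bytes >= MIN_DISK_BYTES:
        return "local"
    return "cloud"
```

A device with 8 GB of RAM and ample storage would route to on-device inference; a 2 GB device would fall back to the cloud API, with the privacy trade-off noted above.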
Business-Focused Model with a Self-Serve API Portal
While the CompactifAI app has drawn fewer than 5,000 downloads in the past month, its primary audience is businesses. To serve them, Multiverse has launched a self-serve API portal that lets developers and enterprises access its compressed models directly, without an intermediary marketplace.
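Many model-hosting portals expose an OpenAI-style chat endpoint; assuming Multiverse’s portal follows that common convention (the article does not specify its schema), a request payload might be assembled like this. The model identifier and field layout are hypothetical:

```python
def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat completion payload.
    Field names follow the widespread convention, not a published Multiverse spec."""
    return {
        "model": model,  # hypothetical compressed-model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "compactifai-example-model",
    "Summarize edge AI in one sentence.",
)
```

The resulting dictionary would then be POSTed to the portal’s completion endpoint with the developer’s API key.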
Enterprise Benefits of Smaller AI Models
CEO Enrique Lizaso said the API portal gives developers the transparency and control they need to run compressed models in production, including real-time usage monitoring. That pitch lines up with enterprise interest in smaller models as an economical alternative to large language models (LLMs).
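Real-time usage monitoring of the kind the portal advertises reduces, at its core, to aggregating per-model token counts as responses arrive. A minimal client-side sketch (the portal’s actual dashboard and metrics are not described in the article):

```python
from collections import defaultdict

class UsageTracker:
    """Minimal client-side token accounting, aggregated per model."""

    def __init__(self) -> None:
        self.tokens = defaultdict(int)  # model name -> total tokens consumed

    def record(self, model: str, prompt_tokens: int, completion_tokens: int) -> None:
        """Add one request's token usage to the running total for a model."""
        self.tokens[model] += prompt_tokens + completion_tokens

    def total(self, model: str) -> int:
        """Total tokens recorded so far for a model (0 if never used)."""
        return self.tokens[model]

tracker = UsageTracker()
tracker.record("model-a", 120, 340)
tracker.record("model-a", 80, 200)
```

Per-model totals like these are what make cost comparisons between compressed models and full-size LLMs concrete.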
Advances in AI Model Capabilities
Smaller models are steadily gaining capability. Mistral, for instance, has released updates to its small-model family, including Mistral Small 4, which handles general chat, coding, agentic tasks, and reasoning. Meanwhile, Multiverse’s latest compressed model, HyperNova 60B 2602, is reported to respond faster and more cheaply than its OpenAI-derived predecessor, making it better suited to complex programming tasks.
Expanding Use Cases and Customer Base
The ability to deploy models locally provides enhanced privacy and resilience, unlocking significant business opportunities, especially in sectors where connectivity is unreliable. Multiverse already supports over 100 global clients, including prominent entities like the Bank of Canada and Bosch. Following a successful $215 million Series B funding round last year, the company is reportedly seeking to raise an additional €500 million, aiming for a valuation exceeding €1.5 billion.
