Announcing our $28M fundraise

Here at Voyage AI, we’re on a mission to help you build the very best RAG and semantic search applications. To that end, we’ve released industry-leading embedding models and rerankers, partnered with preeminent AI companies such as Databricks, Snowflake, Anthropic, Harvey, and Xayn, and built integrations with leading vector database companies.

To further this mission, we’re pleased to announce that we’ve secured $20 million in Series A funding led by CRV, with participation from existing investors Wing VC and Conviction, along with Snowflake, Databricks, Pear VC, Tectonic Ventures, Mayfield Fund, and Fusion Fund, bringing the company’s total funding to $28 million. With this investment, we aim to expand our offerings and continue to provide you with the most advanced models for search and retrieval across unstructured data.

Coinciding with our fundraise, we’re also happy to announce the general availability of two new general-purpose embedding models and two new rerankers:

  • voyage-3 and voyage-3-lite: Both of these models outperform OpenAI’s latest large embedding model in terms of accuracy, context length, and cost-effectiveness. voyage-3 outperforms OpenAI v3 large by 7.55% with 2.2x lower costs and 3x fewer embedding dimensions. voyage-3-lite offers 3.82% better retrieval accuracy than OpenAI v3 large while costing 6x less and having 6x fewer embedding dimensions. Both models also support a 32K-token context length, 4x that of OpenAI’s models. More details can be found in our blog post.
  • rerank-2 and rerank-2-lite: When used on top of OpenAI’s latest embedding model (text-embedding-3-large), rerank-2 and rerank-2-lite improve accuracy by an average of 13.89% and 11.86%, respectively, 2.3x and 1.7x the improvement attained by the latest Cohere reranker (rerank-english-v3.0). Furthermore, rerank-2 and rerank-2-lite support context lengths of 16K and 8K tokens — 4x and 2x the context length of Cohere’s reranker. More details can be found in our blog post.
Left: NDCG@10 of different embedding models across various domains of data. Right: NDCG@10 of various rerankers when used on top of OpenAI’s latest embedding model.

These models are now available via the Voyage API and through the AWS Marketplace. In addition to these existing deployment options, Voyage models are now available as deployable endpoints on the Azure Marketplace and integrated into Snowflake Cortex AI. The Azure offering allows you to deploy Voyage models as real-time inference endpoints directly in your virtual network. By integrating with Snowflake Cortex AI, you can now leverage Voyage models within your Snowflake environments, enhancing data analysis capabilities without the need for complex data migrations.
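To give a flavor of the two-stage retrieval pattern these models are built for, here is a minimal sketch using the voyageai Python client. It is illustrative only: the exact method and field names (`Client.embed`, `Client.rerank`, `input_type`, `relevance_score`) are assumptions based on a typical client layout, and running it requires a valid `VOYAGE_API_KEY`; only the cosine-similarity helper runs without credentials.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def search_and_rerank(query, docs):
    # Assumed client API; requires VOYAGE_API_KEY in the environment.
    import voyageai
    vo = voyageai.Client()

    # Stage 1: embed docs and query with voyage-3, rank by cosine similarity.
    doc_embs = vo.embed(docs, model="voyage-3", input_type="document").embeddings
    query_emb = vo.embed([query], model="voyage-3", input_type="query").embeddings[0]
    candidates = sorted(
        docs,
        key=lambda d: cosine_similarity(query_emb, doc_embs[docs.index(d)]),
        reverse=True,
    )

    # Stage 2: refine the candidate ordering with rerank-2.
    reranked = vo.rerank(query, candidates, model="rerank-2")
    return [(r.relevance_score, r.document) for r in reranked.results]


# Example (network call, not executed here):
# results = search_and_rerank(
#     "Who led the funding round?",
#     ["The Series A was led by CRV.", "Voyage builds embedding models."],
# )
```

The embedding pass gives fast first-stage recall over the whole corpus; the reranker then re-scores only the short candidate list, which is where the accuracy gains reported above come from.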

We’ve also been hard at work upgrading our serving infrastructure, and we’re happy to announce that new and existing users will benefit from 200 million free tokens for all of our new models, in addition to higher rate limits, as outlined in our documentation.

We’re incredibly excited to see what you will build with Voyage AI models – thank you for joining this incredible journey with us. Onward and upward!
