Here at Voyage AI, we’re on a mission to help you build the very best RAG and semantic search applications. As such, we’ve released industry-leading embedding models & rerankers, partnered with preeminent AI companies such as Databricks, Snowflake, Anthropic, Harvey, & Xayn, and built integrations with leading vector database companies.
To further this mission, we’re pleased to announce that we’ve secured $20 million in Series A funding led by CRV, with participation from existing investors Wing VC and Conviction, along with Snowflake, Databricks, Pear VC, Tectonic Ventures, Mayfield Fund, and Fusion Fund, bringing the company’s total funding to $28 million. With this investment, we aim to expand our offerings and continue to provide you with the most advanced models for search and retrieval across unstructured data.
Coinciding with our fundraise, we’re also happy to announce the general availability of two new general-purpose embedding models and two new rerankers:
- voyage-3 and voyage-3-lite: Both of these models outperform OpenAI’s latest large embedding model in terms of accuracy, context length, and cost-effectiveness. voyage-3 outperforms OpenAI v3 large by 7.55% with 2.2x lower costs and 3x fewer embedding dimensions. voyage-3-lite offers 3.82% better retrieval accuracy than OpenAI v3 large while costing 6x less and using 6x fewer embedding dimensions. Both models also support a 32K-token context length, 4x that of OpenAI’s. More details can be found in our blog post.
- rerank-2 and rerank-2-lite: When used on top of OpenAI’s latest embedding model (text-embedding-3-large), rerank-2 and rerank-2-lite improve accuracy by an average of 13.89% and 11.86%, respectively, 2.3x and 1.7x the improvement attained by the latest Cohere reranker (rerank-english-v3.0). Furthermore, rerank-2 and rerank-2-lite support context lengths of 16K and 8K tokens, 4x and 2x the context length of Cohere’s reranker. More details can be found in our blog post.

These models are now available via the Voyage API and through the AWS Marketplace. In addition to these existing deployment options, Voyage models are now available as deployable endpoints on the Azure Marketplace and are integrated into Snowflake Cortex AI. The Azure offering allows you to deploy Voyage models as real-time inference endpoints directly in your virtual network, while the Snowflake Cortex AI integration lets you leverage Voyage models within your Snowflake environments, enhancing data analysis capabilities without the need for complex data migrations.
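To make the embed-then-rerank pipeline concrete, here is a minimal sketch using the `voyageai` Python client, assuming the package is installed and a key is set in `VOYAGE_API_KEY`. The helper names (`cosine_similarity`, `retrieve_and_rerank`) are our own illustration, not part of the client library.

```python
# A minimal retrieval-then-rerank sketch against the Voyage API.
# Assumes `pip install voyageai` and VOYAGE_API_KEY in the environment.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, in plain Python."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def retrieve_and_rerank(query, documents, top_k=3):
    import voyageai  # imported lazily so the helper above stays dependency-free

    vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
    # 1. Embed the corpus and the query with voyage-3 (32K-token context).
    doc_embs = vo.embed(documents, model="voyage-3", input_type="document").embeddings
    query_emb = vo.embed([query], model="voyage-3", input_type="query").embeddings[0]
    # 2. Shortlist candidates by cosine similarity over the embeddings...
    order = sorted(range(len(documents)),
                   key=lambda i: cosine_similarity(query_emb, doc_embs[i]),
                   reverse=True)
    candidates = [documents[i] for i in order[:100]]
    # 3. ...then refine the ordering with rerank-2 (16K-token context).
    reranked = vo.rerank(query, candidates, model="rerank-2", top_k=top_k)
    return [r.document for r in reranked.results]
```

The two-stage shape is the usual pattern: the embedding model keeps first-pass retrieval cheap over a large corpus, and the reranker spends more compute reordering only the shortlist.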
We’ve also been hard at work upgrading our serving infrastructure, and we’re happy to announce that new and existing users will benefit from 200 million free tokens for all of our new models, in addition to higher rate limits, as outlined in our documentation.
We’re incredibly excited to see what you will build with Voyage AI models – thank you for joining this incredible journey with us. Onward and upward!