All-in-One database
with even more power
Written in C++, Cognica handles large amounts of data locally without delay. It is an all-in-one database covering massive traffic processing, fast backup for data protection, cached data serving, machine learning, and discovery.

Cognica is ready for local deployment

The Cognica Search demo combines GPT-3.5 with a total of 700 million vector embeddings, including English and Korean Wikipedia articles and public images, and it was developed on a MacBook Air M2.

Multiple data models, pick as you need

Cognica supports key-value, document, time-series, and vector data models, so you can choose the one that fits the technical requirements of your business or AI applications. Depending on the type of data you want to store (documents, images, audio, or video) and the application you want to build, you can combine several data models in one database.
  • Key-Value
    • Available for caching, storing website sessions, or any other kind of state management
  • Document
    • Collections, documents, query languages, flexible indexing, and more
  • Time Series
    • Provides real-time time series data processing
  • Vector
    • Vectorize various data, such as documents, images, and videos, and store them as vector embeddings
    • Supports similarity search between vector embeddings
    • Unlike products that are limited to a single embedding model at a time, two or more embedding models can be used simultaneously
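As a minimal illustration of the vector model, the similarity search described above can be sketched with plain cosine similarity. The store, embeddings, and helper below are hypothetical toy stand-ins, not Cognica's actual API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical store: document id -> embedding (toy 3-d vectors).
store = {
    "doc-cat": [0.9, 0.1, 0.0],
    "doc-dog": [0.8, 0.2, 0.1],
    "doc-car": [0.0, 0.1, 0.9],
}

query = [1.0, 0.0, 0.0]  # embedding of the user's query
best = max(store, key=lambda doc_id: cosine(query, store[doc_id]))
print(best)  # the most similar document
```

A real deployment would use high-dimensional embeddings and an approximate-nearest-neighbor index rather than a linear scan, but the ranking principle is the same.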

Improved database performance with secondary index combinations

Cognica provides secondary indexes that maximize core database performance, such as search and query speed.
You can fine-tune search performance by combining one or more index types within a single database.
  • Unique / Non-unique Indexes
  • Clustered / Non-clustered Indexes
  • Partial Indexes
  • Full-Text Search Indexes
  • Vector Search Indexes
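To illustrate what one of these index types buys, here is a stdlib-only sketch of the idea behind a partial index: only rows matching a predicate are indexed, which keeps the index small and lookups fast for the common case. The documents and fields are hypothetical, and Cognica's real index implementation is of course not a Python dict:

```python
# Toy documents; in Cognica these would live in a collection.
docs = {
    1: {"status": "active",   "email": "a@example.com"},
    2: {"status": "inactive", "email": "b@example.com"},
    3: {"status": "active",   "email": "c@example.com"},
}

# A partial index over email, restricted to active documents only.
partial_index = {
    doc["email"]: doc_id
    for doc_id, doc in docs.items()
    if doc["status"] == "active"
}

print(partial_index.get("c@example.com"))  # 3 — found via the index
print(partial_index.get("b@example.com"))  # None — inactive rows are not indexed
```

Queries that match the index predicate become O(1) lookups instead of collection scans; queries outside the predicate fall back to other access paths.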

Time to Live, no cache database required

The Time to Live (TTL) feature lets you efficiently manage temporarily stored data that must be deleted after a certain period, such as personal information and login session data.
It also improves the efficiency of generative AI services by temporarily caching query embedding results in LLM applications.
  • Serving cached data for LLM services
    • If an embedding similar to the entered prompt has already been cached, answers can be served without an LLM call, improving response speed and reducing costs.
  • Compliance with personal information protection regulations
    • Regulations require personal information to be destroyed after a certain period of time. Time to Live can delete expired personal information automatically.
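The expiry behavior described above can be sketched in a few lines. This is a minimal stand-in for illustration, not Cognica's TTL engine; the injectable clock exists only so the example runs instantly:

```python
import time

class TTLCache:
    """Minimal sketch of Time-to-Live expiry (hypothetical, not Cognica's engine)."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock      # injectable for deterministic testing
        self._data = {}         # key -> (value, stored_at)

    def set(self, key, value):
        self._data[key] = (value, self.clock())

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._data[key]  # expired: remove on access
            return None
        return value

# Simulated clock so the example does not need to sleep.
now = [0.0]
cache = TTLCache(ttl_seconds=60, clock=lambda: now[0])
cache.set("session:42", {"user": "alice"})
now[0] = 30.0
print(cache.get("session:42"))   # still cached
now[0] = 120.0
print(cache.get("session:42"))   # None — expired and removed
```

The same pattern applies to the LLM cache: store a prompt embedding with a TTL, and serve the cached answer while the entry is still alive.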

Apache Arrow support for fast data transfer

Apache Arrow is a standard for fast data transfer between applications. Cognica is built on Apache Arrow, which enables fast data transfer between the database server and client applications.

PyTorch models, integrate them right now

You can easily integrate PyTorch models without creating a new database or converting an existing one, reducing the time and cost of deploying and managing machine learning models. Data is processed in real time, so machine learning results are reflected immediately and analysis stays seamless.
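Cognica's actual integration API is not shown on this page, so the sketch below only illustrates the inference step a database-side hook would run: a PyTorch model mapping a record's features to an embedding. The model here is an untrained stand-in; a real deployment would load trained weights:

```python
import torch

# Hypothetical stand-in model producing a 2-d "embedding" for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model.eval()  # inference mode: disable dropout/batch-norm updates

record = torch.tensor([0.5, 1.0, 0.0, 0.25])  # features of one stored record
with torch.no_grad():                          # no gradients needed at serving time
    embedding = model(record)
print(embedding.shape)  # torch.Size([2])
```

Running inference next to the data avoids exporting records to a separate serving system, which is the cost saving the paragraph above describes.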

Real-time synchronization with existing infrastructure

Change Data Capture (CDC) lets you synchronize Cognica with your existing database in real time, so you can take advantage of Cognica's features alongside your current infrastructure.
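Conceptually, CDC replays a stream of change events from the source database onto the replica. The event format below is hypothetical (real sources emit their own formats), and the replica dict stands in for the Cognica side of the sync:

```python
# Hypothetical change events captured from a source database, in commit order.
events = [
    {"op": "insert", "key": "user:1", "value": {"name": "Alice"}},
    {"op": "insert", "key": "user:2", "value": {"name": "Bob"}},
    {"op": "update", "key": "user:1", "value": {"name": "Alice Lee"}},
    {"op": "delete", "key": "user:2"},
]

replica = {}  # stands in for the synchronized Cognica store
for event in events:
    if event["op"] in ("insert", "update"):
        replica[event["key"]] = event["value"]
    elif event["op"] == "delete":
        replica.pop(event["key"], None)

print(replica)  # {'user:1': {'name': 'Alice Lee'}}
```

Applying events in commit order keeps the replica consistent with the source without periodic bulk reloads.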

Customized database tailored to user needs

In addition to the capabilities Cognica provides out of the box, you can develop additional functions that meet your business and technical requirements, and customize the database to suit your company's needs.

Use a language of your choice

If database queries are not enough, you can write the operations and algorithms you need in Python or Lua. Lua in particular supports Just-in-Time (JIT) compilation, so scripts run at speeds close to code written in C. We plan to support native languages such as C++ in the future, and to add other languages as customers need them. Feel free to ask the Cognica team!
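The shape of such an extension can be sketched as registering a named function the server invokes per document. The decorator, registry, and `normalize_score` operation below are all hypothetical; Cognica's real registration API is not shown on this page:

```python
# Hypothetical registry of server-side operations.
registry = {}

def operation(name):
    """Decorator that registers a function under a server-callable name."""
    def register(fn):
        registry[name] = fn
        return fn
    return register

@operation("normalize_score")
def normalize_score(doc):
    # Custom logic a plain query could not express: clamp and rescale a field.
    raw = doc.get("score", 0)
    return min(max(raw, 0), 100) / 100.0

print(registry["normalize_score"]({"score": 250}))  # 1.0
```

A Lua version of the same operation would follow the same register-then-invoke pattern, with LuaJIT compiling the hot path to near-C speed.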

Copyright © 2023 Cognica, Inc.

Made with ☕️ and 😽 in San Francisco, CA.