We introduce the process of building a system that automatically extracts and normalizes financial statements from PDFs in various formats using Large Language Models (LLMs). We cover data model design with Structured Output and Pydantic, the extraction process through Google Gemini API, and post-processing methods applicable to real-world scenarios, all implemented in about 200 lines of code.
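The pipeline above can be sketched briefly. Below is a minimal, hedged example: the `FinancialStatement` schema, field names, and sample values are illustrative assumptions, not the post's actual data model, and the Gemini call is shown only as a comment since it requires an API key. The runnable part shows the normalization step: validating raw JSON from the model against the Pydantic schema.

```python
from pydantic import BaseModel

# Hypothetical data model for illustration; the post's actual schema may differ.
class LineItem(BaseModel):
    name: str
    value: float

class FinancialStatement(BaseModel):
    company: str
    fiscal_year: int
    items: list[LineItem]

# The extraction call itself (sketch only, needs an API key):
# from google import genai
# client = genai.Client()
# resp = client.models.generate_content(
#     model="gemini-2.0-flash",
#     contents=[pdf_part, "Extract the financial statement."],
#     config={"response_mime_type": "application/json",
#             "response_schema": FinancialStatement},
# )

# Post-processing: validate the model's raw JSON into the typed schema.
raw = '{"company": "Acme", "fiscal_year": 2023, "items": [{"name": "Revenue", "value": 1200.5}]}'
stmt = FinancialStatement.model_validate_json(raw)
print(stmt.company, stmt.items[0].value)
```

Validating into a typed schema is what makes statements from differently formatted PDFs comparable downstream.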

Read Post

The Monic framework addresses REST's endpoint explosion problem by allowing clients to express their intent as computational expressions through a single /compute endpoint, redefining APIs as a "Computational Interface" integrated with the database.
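To make the contrast concrete, here is a purely hypothetical illustration: the expression syntax and request shape below are assumptions for the sketch, not taken from Monic's documentation.

```python
import json

# Instead of a dedicated endpoint per query shape
# (e.g. GET /users/42/orders?status=paid), the client sends its intent
# as one expression to a single endpoint. Expression syntax is invented
# here for illustration.
body = json.dumps({
    "expr": 'filter(orders(user(42)), status == "paid")'
})
# The client would POST `body` to /compute and receive the evaluated result.
print(body)
```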

Read Post

We discuss why NOT operations are difficult in vector search.
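The core of the difficulty can be shown with toy vectors (the 2-D embeddings below are made-up values for illustration): in full-text search, NOT is an exact set difference, but with embeddings every document has *some* similarity to the negated concept, so exclusion requires picking an arbitrary threshold.

```python
import math

def cos(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy 2-D embeddings (assumed values, for illustration only).
docs = {
    "doc_about_cats": (1.0, 0.1),
    "doc_about_dogs": (0.9, 0.4),
    "doc_about_cars": (0.1, 1.0),
}
query = (1.0, 0.2)     # "pets"
negative = (0.2, 1.0)  # concept to exclude: "vehicles"

# No document scores exactly 0 against the negated concept, so "NOT"
# cannot be an exact set operation the way it is in FTS.
for name, v in docs.items():
    print(name, round(cos(v, query), 3), round(cos(v, negative), 3))
```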

Read Post

Explains the characteristics and limitations of vector embeddings, and covers the improvements we made to how they are stored.

Read Post

Explains how to build a natural-language search service by adding vector search to an FTS-based case law search demo.
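Combining the two result lists is the crux of such a hybrid setup. As a hedged sketch (the post's actual fusion method is not specified here), reciprocal rank fusion (RRF) is one common way to merge FTS and vector rankings without comparing their incompatible scores:

```python
def rrf(rankings, k=60):
    # Reciprocal rank fusion: each list contributes 1/(k + rank) per document,
    # so documents ranked well by both lists rise to the top.
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from the two retrievers.
fts_hits = ["case_101", "case_205", "case_007"]
vec_hits = ["case_205", "case_007", "case_314"]
fused = rrf([fts_hits, vec_hits])
print(fused)
```

RRF needs only ranks, not scores, which is why it pairs well with retrievers whose scoring scales differ (BM25 vs. cosine similarity).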

Read Post

Explains the process of downloading case law data and building a case law search service in just one day using Cognica.

Read Post

We explain the end-to-end process of building a product search service with Cognica: data collection and processing, search, and service development. Learn how to index a mix of structured and unstructured data, and how to transform queries with an LLM before searching.
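The query-transformation idea can be sketched as follows. In the post this step uses an LLM; here a rule-based stub stands in so the example runs offline, and the output shape (keywords plus a structured filter) is an assumption for illustration.

```python
import re

def transform_query(q: str) -> dict:
    # Split a natural-language product query into full-text keywords and a
    # structured filter over indexed fields. An LLM would do this step in
    # practice; this regex stub only handles "under $N" for the demo.
    m = re.search(r"under \$?(\d+)", q)
    price = int(m.group(1)) if m else None
    keywords = re.sub(r"under \$?\d+", "", q).strip()
    return {"keywords": keywords, "filter": {"price_lt": price}}

print(transform_query("wireless earbuds under $50"))
```

Separating the structured filter from the keywords lets the backend apply an exact predicate on indexed fields while the keywords go to full-text search.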

Read Post