We walk through building a system that automatically extracts and normalizes financial statements from PDFs in varying formats using Large Language Models (LLMs). We cover data model design with Structured Output and Pydantic, extraction through the Google Gemini API, and post-processing methods applicable to real-world scenarios, all in about 200 lines of code.
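As a taste of the approach, here is a minimal sketch of the schema-driven side of such a pipeline. The `IncomeStatement` and `LineItem` models are hypothetical examples, not the post's actual schema, and the Gemini call itself is omitted: with structured output the model is constrained to return JSON matching the schema, so only the Pydantic validation step is shown.

```python
from pydantic import BaseModel


class LineItem(BaseModel):
    """One line of a financial statement, e.g. "Revenue" or "Net income"."""
    name: str
    amount: float


class IncomeStatement(BaseModel):
    """Hypothetical normalized target shape for an extracted statement."""
    company: str
    fiscal_year: int
    currency: str
    items: list[LineItem]


# A sample of the kind of JSON a structured-output LLM call could return;
# validating it against the schema is the normalization step.
sample = """{
  "company": "Acme Corp",
  "fiscal_year": 2023,
  "currency": "USD",
  "items": [{"name": "Revenue", "amount": 1200000.0},
            {"name": "Net income", "amount": 150000.0}]
}"""

statement = IncomeStatement.model_validate_json(sample)
print(statement.items[0].name)  # → Revenue
```

Passing the same Pydantic model as the response schema to the LLM call is what keeps extraction output and downstream code in agreement.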

Read Post

Explains the characteristics and limitations of vector embeddings, and covers the improvements made to how they are stored.

Read Post

Explains how to build a natural language search service by adding vector search to a case law search demo built on FTS (full-text search).

Read Post

Explains the process of downloading case law data and building a case law search service in just one day using Cognica.

Read Post

We explain data collection and processing, search, and service development for product search using Cognica. Learn how to index a mix of structured and unstructured data, and how to transform search queries using an LLM.
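To illustrate the query-transformation idea, here is a small sketch of turning an LLM's JSON rewrite of a natural language query into a structured query object. The field names (`keywords`, `filters`) and the `ProductQuery` type are illustrative assumptions, not Cognica's actual API; the LLM call that produces the JSON is omitted.

```python
import json
from dataclasses import dataclass, field


@dataclass
class ProductQuery:
    """Hypothetical structured query: full-text keywords plus exact filters."""
    keywords: str
    filters: dict = field(default_factory=dict)


def parse_llm_query(llm_json: str) -> ProductQuery:
    """Parse the LLM's JSON rewrite of a user query into a query object."""
    data = json.loads(llm_json)
    return ProductQuery(keywords=data.get("keywords", ""),
                        filters=data.get("filters", {}))


# e.g. the LLM might rewrite "red running shoes under $100" as:
llm_output = '{"keywords": "running shoes", "filters": {"color": "red", "max_price": 100}}'
query = parse_llm_query(llm_output)
print(query.filters["max_price"])  # → 100
```

The split matters because filters like price or color are better served by the structured index, while the remaining keywords go to full-text or vector search.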

Read Post

Copyright © 2024 Cognica, Inc.

Made with ☕️ and 😽 in San Francisco, CA.