Chapter 4: Query Optimization Theory

This chapter develops the theoretical foundations for cost-based query optimization in unified systems. We formalize the query space as a complete lattice, establish equivalence-preserving transformations, build cost models spanning multiple paradigms, and develop selectivity estimation techniques. The chapter concludes with a category-theoretic perspective that reveals the deep structure underlying query optimization.

4.1 The Query Optimization Problem

Query optimization is the process of transforming a declarative query into an efficient execution plan. The declarative query specifies what data to retrieve; optimization determines how to retrieve it.

4.1.1 Search Space Explosion

Consider a simple join of five tables:

SELECT * FROM A, B, C, D, E
WHERE A.x = B.x AND B.y = C.y AND C.z = D.z AND D.w = E.w;

The number of possible join orderings is:

\frac{(2(n-1))!}{(n-1)!} = \frac{8!}{4!} = 1680 \quad \text{for } n = 5

For each ordering, multiple physical implementations exist (nested loop, hash join, merge join). With index choices, the search space grows exponentially.
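The formula above can be checked directly. A small sketch (the helper name `join_orderings` is ours, not part of any system):

```python
from math import factorial

def join_orderings(n: int) -> int:
    # Number of ordered binary join trees over n relations:
    # (2(n-1))! / (n-1)!
    return factorial(2 * (n - 1)) // factorial(n - 1)

print(join_orderings(5))  # 1680, matching the five-table query above
```

Already at n = 10 the count exceeds 17 billion, which is why exhaustive enumeration is limited to small join graphs.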

4.1.2 Multi-Paradigm Complexity

In a unified system, optimization spans paradigms:

SELECT p.title, bm25_score(p.content) as relevance
FROM papers p
WHERE MATCH(p.content) AGAINST ('distributed systems')
  AND vector_similarity(p.embedding, ?) > 0.8
  AND p.author_id IN (
    SELECT vertex_id FROM graph_traverse(:professor, 'ADVISED', 3)
  )
ORDER BY relevance DESC
LIMIT 10;

This query involves:

  • Full-text search (MATCH...AGAINST)
  • Vector similarity (embedding comparison)
  • Graph traversal (advisor relationships)
  • Relational filtering (author constraint)

The optimizer must reason about posting list intersections, index selection across paradigms, and cost trade-offs between different execution strategies.

4.1.3 Optimization Objectives

A query optimizer seeks to minimize a cost function:

\text{minimize } C(P) = w_{io} \cdot C_{io}(P) + w_{cpu} \cdot C_{cpu}(P) + w_{mem} \cdot C_{mem}(P)

where:

  • C_{io}(P) is I/O cost (disk reads/writes)
  • C_{cpu}(P) is CPU cost (operations performed)
  • C_{mem}(P) is memory cost (working memory required)
  • w_{*} are weights reflecting system characteristics

Different systems weight these differently: OLTP systems prioritize latency (minimize C_{io}), OLAP systems prioritize throughput (minimize C_{cpu} per row), and memory-constrained systems bound C_{mem}.
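As an illustration, the weighted objective can be evaluated directly; the weight profiles below are invented for illustration, not tuned values:

```python
def plan_cost(c_io: float, c_cpu: float, c_mem: float, w: dict) -> float:
    """C(P) = w_io * C_io(P) + w_cpu * C_cpu(P) + w_mem * C_mem(P)."""
    return w["io"] * c_io + w["cpu"] * c_cpu + w["mem"] * c_mem

# Hypothetical profiles: OLTP penalizes I/O, OLAP penalizes per-row CPU.
oltp = {"io": 4.0, "cpu": 1.0, "mem": 0.5}
olap = {"io": 1.0, "cpu": 2.0, "mem": 0.5}

print(plan_cost(100, 50, 10, oltp))  # 455.0
print(plan_cost(100, 50, 10, olap))  # 205.0
```

The same plan (100 I/O units, 50 CPU units, 10 memory units) costs more than twice as much under the OLTP profile, so the two profiles can prefer different plans.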

4.2 Query Space as Complete Lattice

We formalize the space of query plans as a mathematical structure amenable to optimization algorithms.

4.2.1 Logical Query Plans

A logical query plan is a tree of relational operators:

\mathcal{L} = \{\text{Scan}, \text{Filter}, \text{Project}, \text{Join}, \text{Aggregate}, \text{Sort}, \text{Limit}, ...\}

Each operator has:

  • Input schema(s)
  • Output schema
  • Semantic meaning (what it computes)

Example logical plan:

Limit(10)
  Sort(relevance DESC)
    Project(title, relevance)
      Filter(vector_sim > 0.8)
        Join(author_id = vertex_id)
          FTSSearch(content, 'distributed systems')
            Scan(papers)
          GraphTraverse(professor, ADVISED, 3)

4.2.2 Physical Query Plans

A physical query plan specifies concrete implementations:

\mathcal{P} = \{\text{SeqScan}, \text{IndexScan}, \text{NestedLoopJoin}, \text{HashJoin}, \text{MergeJoin}, ...\}

Each physical operator has:

  • Implementation algorithm
  • Cost characteristics
  • Resource requirements

Example physical plan:

HeapTopK(10, relevance DESC)
  Project(title, relevance)
    HashJoin(author_id = vertex_id)
      PostingListIntersect
        WANDSearch(content, 'distributed systems', k=1000)
        HNSWSearch(embedding, query_vec, k=1000, threshold=0.8)
      BFSTraversal(professor, ADVISED, max_depth=3)

4.2.3 Plan Equivalence

Two plans are equivalent if they produce identical results for all database states:

P_1 \equiv P_2 \iff \forall D: \text{eval}(P_1, D) = \text{eval}(P_2, D)

Equivalence induces equivalence classes on the plan space.

4.2.4 Partial Ordering by Cost

Define a partial order on plans by estimated cost:

P_1 \preceq P_2 \iff C(P_1) \leq C(P_2)

Within an equivalence class, the optimizer seeks the minimum element under \preceq.

4.2.5 Lattice Structure

Theorem 4.1 (Query Plan Lattice): The space of query plans for a given query, ordered by cost, forms a complete lattice.

Proof sketch:

  • Bottom: The optimal plan (minimum cost)
  • Top: The worst plan (maximum cost)
  • Meet: Given plans P_1, P_2, their meet is the cheaper of any common refinement
  • Join: The join is the more expensive plan from which both are reachable by cost-reducing transformations

The lattice structure guarantees that local search algorithms (like dynamic programming) can find global optima under appropriate conditions.

4.2.6 Plan Space Navigation

Optimization algorithms navigate the lattice:

Dynamic Programming (DPccp): Bottom-up construction of optimal subplans, guaranteed to find global optimum for join ordering.

Transformation-Based: Apply equivalence rules to transform plans, hill-climbing toward lower cost.

Randomized: Simulated annealing or genetic algorithms for large search spaces.


4.3 Equivalence-Preserving Transformations

Transformations convert one plan to an equivalent plan, enabling search through the plan space.

4.3.1 Selection Pushdown

Rule: Push selections through projections when possible.

\sigma_\phi(\pi_A(R)) \equiv \pi_A(\sigma_\phi(R)) \quad \text{if } \text{attrs}(\phi) \subseteq A

Benefit: Filtering early reduces intermediate result sizes.

Example:

Before: Project(name, price) -> Filter(price > 100) -> Scan(products)
After:  Project(name, price) -> Scan(products) with pushed filter price > 100
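The rule above can be sketched as a rewrite over a tiny plan tree; the `Plan` representation and the `attrs` bookkeeping here are our own simplifications, not a real optimizer's data structures:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plan:
    op: str                          # "Scan", "Filter", or "Project"
    detail: str = ""                 # predicate or column list, for display
    attrs: frozenset = frozenset()   # attrs used (Filter) or kept (Project)
    child: Optional["Plan"] = None

def push_filter_through_project(p: Plan) -> Plan:
    # sigma_phi(pi_A(R)) == pi_A(sigma_phi(R))  when attrs(phi) is a subset of A
    if (p.op == "Filter" and p.child is not None
            and p.child.op == "Project" and p.attrs <= p.child.attrs):
        proj = p.child
        return Plan("Project", proj.detail, proj.attrs,
                    Plan("Filter", p.detail, p.attrs, proj.child))
    return p

plan = Plan("Filter", "price > 100", frozenset({"price"}),
            Plan("Project", "name, price", frozenset({"name", "price"}),
                 Plan("Scan", "products")))
rewritten = push_filter_through_project(plan)
print(rewritten.op, "->", rewritten.child.op, "->", rewritten.child.child.op)
# Project -> Filter -> Scan
```

The guard `p.attrs <= p.child.attrs` is exactly the side condition attrs(phi) ⊆ A; when it fails, the rewrite would change results and the plan is returned unchanged.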

4.3.2 Selection Split and Merge

Rule: Conjunctive predicates can split or merge.

\sigma_{\phi_1 \land \phi_2}(R) \equiv \sigma_{\phi_1}(\sigma_{\phi_2}(R))

Benefit: Enables independent optimization of each predicate.

4.3.3 Selection Commutativity

Rule: Independent selections commute.

\sigma_{\phi_1}(\sigma_{\phi_2}(R)) \equiv \sigma_{\phi_2}(\sigma_{\phi_1}(R))

Benefit: Enables ordering by selectivity (most selective first).

4.3.4 Join Commutativity

Rule: Joins are commutative.

R \bowtie S \equiv S \bowtie R

Benefit: Choose build/probe sides for hash join based on size.

4.3.5 Join Associativity

Rule: Joins are associative.

(R \bowtie S) \bowtie T \equiv R \bowtie (S \bowtie T)

Benefit: Enables exploring all join orderings.

4.3.6 Selection-Join Exchange

Rule: Push selection through join.

\sigma_\phi(R \bowtie S) \equiv \sigma_\phi(R) \bowtie S \quad \text{if } \text{attrs}(\phi) \subseteq \text{attrs}(R)

\sigma_\phi(R \bowtie S) \equiv R \bowtie \sigma_\phi(S) \quad \text{if } \text{attrs}(\phi) \subseteq \text{attrs}(S)

Benefit: Reduces join input sizes.

4.3.7 Projection Pushdown

Rule: Push projection through join.

\pi_A(R \bowtie_\theta S) \equiv \pi_A(\pi_{A_R \cup J}(R) \bowtie_\theta \pi_{A_S \cup J}(S))

where J = join attributes, A_R = A \cap \text{attrs}(R), and A_S = A \cap \text{attrs}(S).

Benefit: Reduces data width in intermediate results.

4.3.8 Cross-Paradigm Transformations

Unified systems enable novel transformations:

FTS-Filter Interchange:

\sigma_\phi(\text{FTS}(t, R)) \equiv \text{FTS}(t, \sigma_\phi(R)) \quad \text{when } \phi \text{ is independent of FTS}

Vector-Filter Interchange:

\sigma_\phi(\text{kNN}(v, k, R)) \equiv \text{kNN}(v, k', \sigma_\phi(R)) \quad \text{with adjusted } k'

Note: Vector search may require k' > k to account for filtered-out results.

Graph-Relational Interchange:

\text{GraphTraverse}(v, t, k) \cap \sigma_\phi(R) \equiv \text{FilteredTraverse}(v, t, k, \phi)

This holds when the filter can be pushed into the traversal itself.

4.4 Cost Model Fundamentals

Cost models estimate execution cost without actually running queries.

4.4.1 I/O Cost Model

Sequential Read Cost:

C_{seq}(n) = n \cdot c_{seq}

where n = pages read and c_{seq} = cost per sequential page read.

Random Read Cost:

C_{rand}(n) = n \cdot c_{rand}

where c_{rand} \gg c_{seq} (typically 10-100x for HDD, 2-10x for SSD).

Index Scan Cost:

C_{idx}(R, \phi) = \text{height}(I) \cdot c_{rand} + \text{sel}(\phi) \cdot |R| \cdot c_{rand}

Index lookup plus data page fetches for matching rows.

Sequential Scan Cost:

C_{scan}(R) = \text{pages}(R) \cdot c_{seq}
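The two formulas above determine a break-even selectivity between index and sequential scans. A sketch with assumed constants (1M rows on 10K pages, SSD-like c_rand = 4 · c_seq, index height 3; all numbers illustrative):

```python
def seqscan_cost(pages: int, c_seq: float = 1.0) -> float:
    # C_scan(R) = pages(R) * c_seq
    return pages * c_seq

def indexscan_cost(rows: int, sel: float, height: int = 3,
                   c_rand: float = 4.0) -> float:
    # C_idx(R, phi) = height(I) * c_rand + sel(phi) * |R| * c_rand
    return height * c_rand + sel * rows * c_rand

rows, pages = 1_000_000, 10_000
for sel in (0.0001, 0.001, 0.01):
    better = ("index" if indexscan_cost(rows, sel) < seqscan_cost(pages)
              else "seq")
    print(f"sel={sel}: {better} scan wins")
```

Under these constants the index scan wins below roughly 0.25% selectivity and the sequential scan wins above it, which is why optimizers abandon secondary indexes for low-selectivity predicates.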

4.4.2 CPU Cost Model

Filter Cost:

C_{filter}(\phi, n) = n \cdot c_{comp}(\phi)

where c_{comp}(\phi) depends on predicate complexity.

Hash Table Build Cost:

C_{build}(R) = |R| \cdot c_{hash}

Hash Probe Cost:

C_{probe}(S, R) = |S| \cdot c_{probe}

Sort Cost:

C_{sort}(R) = |R| \cdot \log(|R|) \cdot c_{cmp}

4.4.3 Memory Cost Model

Hash Join Memory:

M_{hash}(R) = |R| \cdot \text{row\_size}(R) \cdot (1 + \text{overhead})

If M_{hash}(R) > M_{available}, spill to disk.

Sort Memory:

M_{sort}(R) = \min(|R| \cdot \text{row\_size}(R), M_{available})

External sort required when input exceeds memory.

4.4.4 Join Cost Models

Nested Loop Join:

C_{NLJ}(R, S) = |R| \cdot c_{outer} + |R| \cdot |S| \cdot c_{inner}

Hash Join:

C_{HJ}(R, S) = |R| \cdot c_{build} + |S| \cdot c_{probe} + |R \bowtie S| \cdot c_{output}

Merge Join (sorted inputs):

C_{MJ}(R, S) = |R| \cdot c_{merge} + |S| \cdot c_{merge}

Decision Criterion:

  • Use NLJ when |R| is small or the inner side is indexed
  • Use HJ when memory is sufficient for the build side
  • Use MJ when inputs are pre-sorted or the sort cost is amortized
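The decision criterion above amounts to evaluating each cost formula and taking the minimum. A sketch with invented unit costs (the constants are placeholders, not calibrated values):

```python
def nlj_cost(r: int, s: int, c_outer=1.0, c_inner=0.01) -> float:
    # C_NLJ = |R| * c_outer + |R| * |S| * c_inner
    return r * c_outer + r * s * c_inner

def hj_cost(r: int, s: int, out: int,
            c_build=1.5, c_probe=1.0, c_out=0.1) -> float:
    # C_HJ = |R| * c_build + |S| * c_probe + |output| * c_out
    return r * c_build + s * c_probe + out * c_out

def mj_cost(r: int, s: int, c_merge=1.0) -> float:
    # C_MJ = |R| * c_merge + |S| * c_merge (inputs already sorted)
    return r * c_merge + s * c_merge

def pick_join(r: int, s: int, out: int, sorted_inputs: bool = False) -> str:
    costs = {"NLJ": nlj_cost(r, s), "HJ": hj_cost(r, s, out)}
    if sorted_inputs:                # MJ only applies to sorted inputs
        costs["MJ"] = mj_cost(r, s)
    return min(costs, key=costs.get)

print(pick_join(10, 1_000_000, 10))          # NLJ: tiny outer side
print(pick_join(100_000, 100_000, 50_000))   # HJ: both sides large
```

With these constants, a ten-row outer side favors nested loops while two large inputs favor the hash join, matching the criterion stated above.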

4.4.5 Posting List Operation Costs

Intersection Cost:

C_\cap(P_1, P_2) = \min(|P_1|, |P_2|) \cdot c_{seek} + \min(|P_1|, |P_2|) \cdot c_{cmp}

With skip pointers, seeks skip past non-matching regions.

Union Cost:

C_\cup(P_1, P_2) = (|P_1| + |P_2|) \cdot c_{merge}

Linear merge of sorted lists.

FTS Scoring Cost:

C_{BM25}(P, q) = |P| \cdot |q| \cdot c_{score}

Score computation for each document-term pair.

4.4.6 Vector Search Cost

HNSW Search Cost:

C_{HNSW}(k, ef) = ef \cdot \log(N) \cdot c_{dist}

where ef = search width, N = index size, and c_{dist} = distance computation cost.

Brute Force Cost:

C_{brute}(N, d) = N \cdot d \cdot c_{mul}

where d = vector dimension.
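The gap between the two models widens with index size. A sketch, folding the per-dimension work N · d · c_mul into one full-distance unit c_dist so both costs are in distance computations (ef = 100 is an assumed search width):

```python
from math import log

def hnsw_cost(n: int, ef: int = 100, c_dist: float = 1.0) -> float:
    # C_HNSW(k, ef) = ef * log(N) * c_dist
    return ef * log(n) * c_dist

def brute_cost(n: int, c_dist: float = 1.0) -> float:
    # One full d-dimensional distance per indexed vector
    return n * c_dist

for n in (10_000, 10_000_000):
    print(f"N={n}: HNSW ~ {hnsw_cost(n):.0f}, brute ~ {brute_cost(n):.0f}")
```

At N = 10M the logarithmic model is several thousand times cheaper, which is why approximate indexes dominate for large collections, while brute force can remain competitive (and exact) for small ones.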

4.4.7 Graph Traversal Cost

BFS Cost:

C_{BFS}(k, d_{avg}) = \sum_{i=1}^{k} d_{avg}^i \cdot c_{expand}

where k = depth and d_{avg} = average degree.

With Filter:

C_{filtered\_BFS}(k, d_{avg}, sel) = \sum_{i=1}^{k} (d_{avg} \cdot sel)^i \cdot c_{expand}

Filter selectivity reduces effective degree at each hop.
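Because selectivity compounds per hop, even a moderate filter shrinks the frontier dramatically. A direct evaluation of the two sums (constants illustrative):

```python
def bfs_cost(depth: int, d_avg: float, sel: float = 1.0,
             c_expand: float = 1.0) -> float:
    # sum_{i=1..k} (d_avg * sel)^i * c_expand
    eff = d_avg * sel
    return sum(eff ** i for i in range(1, depth + 1)) * c_expand

print(bfs_cost(3, 10))        # 10 + 100 + 1000 = 1110.0
print(bfs_cost(3, 10, 0.5))   # 5 + 25 + 125  = 155.0
```

Halving the effective degree cuts the three-hop cost by roughly 7x here, so pushing the filter into the traversal (Section 4.3.8) is usually far better than filtering afterward.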

4.5 Cardinality Estimation

Accurate cardinality estimation is critical for cost-based optimization.

4.5.1 Base Table Statistics

For each table RR, maintain:

  • |R|: Row count
  • V(A, R): Distinct values for attribute A
  • \min(A, R), \max(A, R): Value range
  • \text{hist}(A, R): Value histogram

4.5.2 Single Predicate Selectivity

Equality: \sigma_{A=v}(R)

\text{sel}(A = v) = \frac{1}{V(A, R)}

Range: \sigma_{A > v}(R)

\text{sel}(A > v) = \frac{\max(A, R) - v}{\max(A, R) - \min(A, R)}

With histogram: Use histogram bucket frequencies for more accurate estimation.

4.5.3 Compound Predicate Selectivity

Independence Assumption:

\text{sel}(\phi_1 \land \phi_2) = \text{sel}(\phi_1) \cdot \text{sel}(\phi_2)

With Correlation:

\text{sel}(\phi_1 \land \phi_2) = \text{sel}(\phi_1) \cdot \text{sel}(\phi_2) \cdot \text{corr}(\phi_1, \phi_2)

where \text{corr}(\phi_1, \phi_2) captures dependency (1 = independent, >1 = positive correlation, <1 = negative correlation).

4.5.4 Join Cardinality

Foreign Key Join:

|R \bowtie_{R.fk = S.pk} S| = |R|

Each row in R matches exactly one row in S.

General Equijoin:

|R \bowtie_{R.A = S.B} S| = \frac{|R| \cdot |S|}{\max(V(A, R), V(B, S))}

Assuming uniform distribution.

With Statistics:

|R \bowtie_{R.A = S.B} S| = \sum_{v} \text{freq}(v, R.A) \cdot \text{freq}(v, S.B)
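Both estimators are one-liners over table statistics. A sketch (the frequency dictionaries stand in for per-value statistics a real system would keep in histograms):

```python
def equijoin_card(r_rows: int, s_rows: int, v_a: int, v_b: int) -> float:
    # |R join S| = |R| * |S| / max(V(A,R), V(B,S)), uniform assumption
    return r_rows * s_rows / max(v_a, v_b)

def equijoin_card_hist(freq_r: dict, freq_s: dict) -> int:
    # sum over shared values v of freq(v, R.A) * freq(v, S.B)
    return sum(f * freq_s.get(v, 0) for v, f in freq_r.items())

print(equijoin_card(1000, 5000, 100, 50))               # 50000.0
print(equijoin_card_hist({"x": 10, "y": 5}, {"x": 2}))  # 20
```

The second estimator handles skew that the uniform formula misses: a single hot value can dominate the join size even when distinct-value counts look harmless.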

4.5.5 FTS Cardinality

Term Query:

|\tau_t| = \text{df}(t)

Document frequency from inverted index.

Conjunction:

|\tau_{t_1} \cap \tau_{t_2}| \approx \frac{\text{df}(t_1) \cdot \text{df}(t_2)}{N}

Independence assumption.

With Co-occurrence Statistics:

|\tau_{t_1} \cap \tau_{t_2}| = \text{codf}(t_1, t_2)

Pre-computed co-occurrence counts.

4.5.6 Vector Search Cardinality

k-NN Search:

|\text{kNN}(q, k)| = k

By definition.

Threshold Search:

|\text{sim}(q, \cdot) > \theta| \approx N \cdot P(\text{sim} > \theta)

Requires distribution model for similarity scores.

4.5.7 Graph Traversal Cardinality

k-Hop Traversal:

|\tau_G(v, T, k)| \approx \min(N, d_{avg}^k)

Exponential growth bounded by graph size.

With Selectivity:

|\tau_G(v, T, k) \cap \sigma_\phi| \approx \min(N \cdot \text{sel}(\phi), d_{avg}^k \cdot \text{sel}(\phi)^k)

Filter applies at each hop.

4.6 Selectivity Estimation Techniques

4.6.1 Histograms

Equi-width Histogram: Divide value range into equal-width buckets.

\text{sel}(A \in [l, u]) = \sum_{b: b \cap [l, u] \neq \emptyset} \frac{|b \cap [l, u]|}{|b|} \cdot \text{freq}(b)

Equi-depth Histogram: Each bucket contains equal number of values.

Better for skewed distributions.

Compressed Histogram: Store frequent values separately, histogram for remainder.
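The equi-width estimate above can be sketched directly; bucket boundaries and row fractions below are invented for illustration:

```python
def range_selectivity(buckets, lo, hi):
    """buckets: (b_lo, b_hi, frac_of_rows) triples. Assumes values are
    spread uniformly inside each bucket, as in the estimate above."""
    sel = 0.0
    for b_lo, b_hi, frac in buckets:
        overlap = max(0.0, min(hi, b_hi) - max(lo, b_lo))
        if overlap > 0:
            sel += frac * overlap / (b_hi - b_lo)
    return sel

# Four equi-width buckets over [0, 100); fractions sum to 1.0.
hist = [(0, 25, 0.10), (25, 50, 0.20), (50, 75, 0.40), (75, 100, 0.30)]
print(range_selectivity(hist, 40, 60))
```

For the range [40, 60] the estimate takes 10/25 of the second bucket and 10/25 of the third, giving about 0.24. The uniform-within-bucket assumption is exactly what equi-depth histograms improve on for skewed data.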

4.6.2 Sampling

Random Sampling: Estimate selectivity from sample.

\text{sel}(\phi) \approx \frac{|\sigma_\phi(\text{sample})|}{|\text{sample}|}

Confidence Interval:

\text{sel}(\phi) \in \left[\hat{p} - z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}},\; \hat{p} + z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\right]
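The interval above is the standard normal approximation; a minimal sketch (z = 1.96 for a 95% interval, sample numbers invented):

```python
from math import sqrt

def selectivity_ci(matches: int, n: int, z: float = 1.96):
    """Normal-approximation confidence interval for p-hat = matches / n."""
    p = matches / n
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

lo, hi = selectivity_ci(120, 1000)   # 120 matching rows in a 1000-row sample
print(f"sel ~ 0.12, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The interval width shrinks with the square root of the sample size, so quadrupling the sample only halves the uncertainty; this trade-off governs how much sampling an optimizer can afford per query.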

4.6.3 Sketches

Count-Min Sketch: Estimate frequency of items.

Space: O(\frac{1}{\epsilon} \log \frac{1}{\delta}) for (1+\epsilon) approximation with probability 1-\delta.

HyperLogLog: Estimate cardinality (distinct count).

Space: O(\log \log N) per register for relative error \frac{1.04}{\sqrt{m}}, where m is the number of registers.

4.6.4 Machine Learning Approaches

Query-Driven Learning: Train models on query workload.

\text{sel}(\phi) = f_\theta(\text{features}(\phi))

Features include: predicate type, column statistics, query structure.

Learned Cardinality Estimation: Neural networks trained on actual cardinalities.

4.7 The DPccp Algorithm

Dynamic Programming for Connected Subgraph Complement Pairs (DPccp) is the standard algorithm for optimal join ordering.

4.7.1 Problem Formulation

Given relations R_1, ..., R_n and join graph G = (V, E), where vertices are relations and edges are join predicates:

Goal: Find minimum-cost join tree.

4.7.2 Algorithm

Algorithm: DPccp(R_1, ..., R_n, G)
  // Initialize single relations
  for i in 1..n:
    opt[{R_i}] = AccessPath(R_i)

  // Enumerate subgraph complement pairs
  for S in subsets(R_1, ..., R_n) ordered by size:
    for (S_1, S_2) in ccp_pairs(S, G):
      for each join algorithm J:
        cost = C(J, opt[S_1], opt[S_2])
        if S not in opt or cost < opt[S].cost:
          opt[S] = (J, opt[S_1], opt[S_2], cost)

  return opt[{R_1, ..., R_n}]

ccp_pairs(S, G): Enumerate pairs (S_1, S_2) where S_1 \cup S_2 = S, S_1 \cap S_2 = \emptyset, and both S_1 and S_2 are connected in G.
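The pseudocode above can be sketched as runnable Python. This sketch naively enumerates every split of each connected subset rather than using DPccp's efficient ccp enumeration, and its cardinality estimate (min of the input sizes) is deliberately crude; it illustrates the DP structure only:

```python
from itertools import combinations

def dp_join_order(rels, edges, card, join_cost):
    """rels: relation names; edges: set of frozenset pairs (join graph);
    card: name -> row count; join_cost(l_rows, r_rows) -> float."""
    def connected(s):
        start = next(iter(s))
        seen, frontier = {start}, [start]
        while frontier:
            u = frontier.pop()
            for v in s:
                if v not in seen and frozenset((u, v)) in edges:
                    seen.add(v)
                    frontier.append(v)
        return seen == set(s)

    # opt[S] = (cost, join tree, estimated rows)
    opt = {frozenset((r,)): (0.0, r, card[r]) for r in rels}
    for size in range(2, len(rels) + 1):
        for s in map(frozenset, combinations(rels, size)):
            if not connected(s):
                continue
            for k in range(1, size):
                for left in map(frozenset, combinations(s, k)):
                    right = s - left
                    if left not in opt or right not in opt:
                        continue          # a disconnected half: not a ccp
                    lc, lt, lr = opt[left]
                    rc, rt, rr = opt[right]
                    cost = lc + rc + join_cost(lr, rr)
                    if s not in opt or cost < opt[s][0]:
                        opt[s] = (cost, (lt, rt), min(lr, rr))
    return opt[frozenset(rels)]

# Chain A-B-C with assumed cardinalities and a toy linear join cost.
edges = {frozenset(("A", "B")), frozenset(("B", "C"))}
cost, tree, rows = dp_join_order(["A", "B", "C"], edges,
                                 {"A": 100, "B": 10, "C": 1000},
                                 lambda l, r: l + r)
print(cost, tree)
```

Because only connected subsets ever enter `opt`, any split whose halves both appear in `opt` is automatically a valid ccp pair: if a connected set splits into two connected halves, at least one join edge must cross the cut.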

4.7.3 Complexity

  • Subsets: O(2^n)
  • CCP pairs per subset: O(3^n / 2^n) on average
  • Total: O(3^n)

For large n, heuristics or randomized algorithms become necessary.

4.7.4 Extensions for Unified Queries

DPccp extends to multi-paradigm queries by:

  1. Treating FTS/Vector/Graph operations as "virtual relations"
  2. Modeling posting list intersections as joins
  3. Including paradigm-specific cost models

4.8 Category-Theoretic Perspective

For readers with category theory background, we sketch the categorical view of query optimization.

4.8.1 Category of Queries

Define category \mathbf{Query}:

  • Objects: Query plans (logical or physical)
  • Morphisms: Equivalence-preserving transformations

This forms a groupoid (every morphism is invertible) since transformations preserve equivalence.

4.8.2 Cost Functor

The cost function defines a functor:

C: \mathbf{Query} \rightarrow \mathbf{R}^+

mapping query plans to their estimated costs.

Functoriality: Transformations that preserve equivalence should have predictable cost effects:

C(T(Q)) = C(Q) + \Delta(T)

where \Delta(T) is the cost delta of transformation T.

4.8.3 Natural Transformations as Optimizations

An optimization strategy is a natural transformation:

\eta: \text{Id}_{\mathbf{Query}} \Rightarrow \text{Opt}

where \text{Opt} is the "optimized plan" functor.

Naturality ensures optimization is consistent across equivalent query formulations.

4.8.4 Adjunctions in Query Processing

Free-Forgetful Adjunction:

F: \mathbf{SQL} \rightleftharpoons \mathbf{Plan} : U

  • F (free functor): Compile SQL to a logical plan
  • U (forgetful functor): Extract SQL semantics from a plan

The adjunction captures that plans are "free algebras" over SQL expressions.

Embedding-Projection Adjunction:

E: \mathbf{Logical} \rightleftharpoons \mathbf{Physical} : P

  • E (embed): Interpret a logical plan as an abstract physical plan
  • P (project): Recover a physical plan's logical semantics

This adjunction formalizes the relationship between logical and physical planning.

4.8.5 Monad of Query Execution

Query execution forms a monad:

\text{Exec}: \mathbf{Plan} \rightarrow \mathbf{Plan}

with:

  • \eta_P: P \rightarrow \text{Exec}(P) (plan becomes executable)
  • \mu_P: \text{Exec}(\text{Exec}(P)) \rightarrow \text{Exec}(P) (flatten nested execution)

The monad laws ensure consistent execution semantics.

4.9 Practical Optimization Architecture

4.9.1 Two-Phase Optimization

Phase 1: Logical Optimization

  • Apply transformation rules
  • Generate alternative logical plans
  • Prune obviously suboptimal plans

Phase 2: Physical Optimization

  • Select physical operators
  • Choose access paths
  • Determine join algorithms

4.9.2 Rule-Based Optimization

Apply transformation rules in priority order:

  1. Predicate simplification: Constant folding, contradiction detection
  2. Predicate pushdown: Move filters toward data sources
  3. Projection pruning: Remove unused columns
  4. Join elimination: Remove unnecessary joins (e.g., to unique key)
  5. Subquery unnesting: Convert correlated subqueries to joins

4.9.3 Cost-Based Optimization

After rule-based transformations:

  1. Enumerate join orderings (DPccp or heuristic)
  2. For each ordering, select physical operators
  3. Estimate cost of each plan
  4. Select minimum-cost plan

4.9.4 Adaptive Optimization

Modern optimizers adapt during execution:

Adaptive Join Selection: Switch join algorithm based on runtime cardinalities.

Adaptive Parallelism: Adjust parallelism based on observed throughput.

Re-optimization: Re-plan the query mid-execution if cardinality estimates prove badly wrong.

4.10 Cross-Paradigm Optimization

4.10.1 Unified Cost Model

Cross-paradigm optimization requires a unified cost model:

C_{total} = C_{relational} + C_{FTS} + C_{vector} + C_{graph} + C_{integration}

The integration cost captures posting list operations that combine paradigm results.

4.10.2 Paradigm Selection

Given a predicate expressible in multiple paradigms, choose the cheapest:

Example: Find documents where category = 'electronics'

Options:

  1. Relational: Secondary index scan
  2. FTS: Term query on category field
  3. Graph: Label-based vertex filter

Cost comparison:

C_{rel} = \text{height}(idx) \cdot c_{rand} + |\sigma| \cdot c_{rand}

C_{FTS} = c_{term\_lookup} + |P_{electronics}| \cdot c_{decode}

C_{graph} = c_{label\_lookup} + |V_{electronics}| \cdot c_{decode}

Select minimum.

4.10.3 Intersection Ordering

For conjunctive queries across paradigms:

WHERE MATCH(content) AGAINST ('database')  -- FTS
  AND category = 'tech'                     -- Relational
  AND vector_sim(embedding, ?) > 0.8        -- Vector

Order intersections by:

  1. Selectivity (most selective first)
  2. Evaluation cost (cheapest first, tie-breaker)

If FTS selectivity = 0.01, relational selectivity = 0.1, vector selectivity = 0.05:

Order: FTS (0.01) -> Vector (0.05) -> Relational (0.1)
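The ordering rule above is a single sort key: selectivity first, evaluation cost as tie-breaker. A sketch reproducing the example (the cost numbers are invented):

```python
def order_predicates(preds):
    """preds: (name, selectivity, eval_cost) triples. Sort by selectivity
    (most selective first); evaluation cost breaks ties (cheapest first)."""
    return [name for name, _, _ in sorted(preds, key=lambda p: (p[1], p[2]))]

preds = [("relational", 0.10, 1.0),
         ("fts",        0.01, 5.0),
         ("vector",     0.05, 20.0)]
print(order_predicates(preds))   # ['fts', 'vector', 'relational']
```

Running the most selective predicate first shrinks the candidate set fastest, so every later intersection touches fewer posting-list entries.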

4.10.4 Materialization Points

Decide where to materialize intermediate results:

Eager Materialization: Materialize after each paradigm operation.

  • Pro: Simple, predictable memory
  • Con: May materialize large intermediate results

Lazy Materialization: Defer materialization until necessary.

  • Pro: Avoids unnecessary work
  • Con: Complex planning, potential repeated evaluation

Hybrid: Materialize based on estimated intermediate sizes.

4.11 Summary

This chapter established the theoretical foundations for query optimization:

  1. Query space as lattice formalizes the search space with partial ordering by cost, enabling systematic exploration toward optimal plans

  2. Equivalence-preserving transformations include classical rules (selection pushdown, join reordering) extended with cross-paradigm transformations for unified systems

  3. Cost models span I/O, CPU, and memory costs for relational, FTS, vector, and graph operations, enabling unified cost estimation

  4. Selectivity estimation uses histograms, sampling, and sketches for cardinality prediction, critical for accurate cost estimation

  5. DPccp algorithm provides optimal join ordering through dynamic programming over connected subgraph complement pairs

  6. Category-theoretic perspective reveals query optimization as navigation through a category of plans with cost as a functor and optimization strategies as natural transformations

  7. Cross-paradigm optimization requires unified cost models, paradigm selection, intersection ordering, and materialization decisions

The following chapters (Part II) examine the storage engine that underlies all these operations, starting with LSM-tree architecture in Chapter 5.

Copyright (c) 2023-2026 Cognica, Inc.