Benchmarks

This section presents detailed evaluations of VectorSearch across multiple benchmark scenarios, covering performance under static and dynamic indexing setups as well as ablation studies of individual system design components.

We evaluate the system on standard datasets, including Glove1.2M, SIFT10M, Deep1M, and News, and compare it against state-of-the-art baselines including FAISS, HNSWlib, LVQ, and Milvus.


Sections

  • Static Indexing:
    Evaluates precision, memory usage, and query latency on pre-built indexes using static datasets.

  • Dynamic Indexing:
    Measures update latency, scalability, and post-insertion accuracy for real-time index updates without reconstruction.

  • Ablation Studies:
    Evaluates the impact of quantization, graph reranking, and memory optimization strategies on system performance.
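To make the dynamic-indexing setting concrete, the sketch below (an illustration, not VectorSearch's actual implementation) shows the core contract being benchmarked: vectors inserted after the index is built become searchable immediately, with no reconstruction step. The `FlatDynamicIndex` class and its exact-scan search are hypothetical stand-ins for the real system's index structure.

```python
# Minimal sketch (not VectorSearch's implementation) of dynamic indexing:
# new vectors are appended and become searchable immediately, with no
# index reconstruction step between insert and query.
import numpy as np

class FlatDynamicIndex:
    def __init__(self, dim):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)

    def add(self, batch):
        """Insert new vectors without rebuilding the index."""
        batch = np.asarray(batch, dtype=np.float32).reshape(-1, self.dim)
        self.vectors = np.vstack([self.vectors, batch])

    def search(self, query, k=10):
        """Exact k-NN by L2 distance; newly added vectors are visible."""
        query = np.asarray(query, dtype=np.float32)
        dists = np.linalg.norm(self.vectors - query, axis=1)
        idx = np.argsort(dists)[:k]
        return idx, dists[idx]

index = FlatDynamicIndex(dim=4)
index.add(np.eye(4))                  # four unit basis vectors, IDs 0..3
ids, _ = index.search([1, 0, 0, 0], k=1)
print(ids[0])                         # 0: nearest is the first basis vector
index.add([[0.9, 0.1, 0, 0]])         # insert after the index is "built"
ids, _ = index.search([0.9, 0.1, 0, 0], k=1)
print(ids[0])                         # 4: the new vector is found, no rebuild
```

The dynamic-indexing benchmarks measure exactly this path at scale: the latency of `add` and whether `search` accuracy holds up post-insertion.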


📊 Benchmark Summary (Across All Tasks)

Method               Dataset    Precision@10  Recall@10  Latency (ms)  Memory (GiB)
FAISS-IVFPQ          Glove1.2M  0.78          0.71       12            1.2
HNSWlib              Glove1.2M  0.85          0.79       20            2.6
LVQ [VLDB'23]        Deep1M     0.88          0.84       15            1.8
VectorSearch (Ours)  All        0.92          0.87       11            1.0
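For reference, Precision@10 and Recall@10 as reported in tables like this one are typically computed per query from the retrieved IDs and the ground-truth nearest-neighbor IDs, then averaged. The sketch below uses one common definition (recall normalized by the number of true neighbors, capped at k); the exact convention used by a given benchmark may differ, and the IDs shown are made up for illustration.

```python
# Illustrative computation of Precision@k and Recall@k for one query.
# These are generic metric definitions, not code from the benchmark suite.

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved IDs that are true neighbors."""
    return len(set(retrieved[:k]) & set(relevant)) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of the true neighbors recovered in the top-k results."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / min(k, len(relevant)) if relevant else 0.0

retrieved = [3, 1, 4, 15, 5, 9, 2, 6, 8, 7]  # hypothetical top-10 result IDs
relevant  = [1, 2, 3, 10, 11]                # hypothetical true neighbor IDs

print(precision_at_k(retrieved, relevant, 10))  # 0.3
print(recall_at_k(retrieved, relevant, 10))     # 0.6
```

Dataset-level numbers, such as those in the table above, average these per-query scores over the full query set.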

Full tables and visual comparisons are included in each section linked above.