
Topic 2: NoSQL – Vector Databases for AI Applications

Vector Embeddings, Similarity Search, and Milvus

Topic Overview

Vector databases specialize in similarity search over high-dimensional embeddings, enabling semantic queries that traditional databases cannot efficiently support. Unlike exact-match databases, vector databases use approximate nearest neighbor (ANN) algorithms to find similar vectors under a distance metric (cosine, L2, inner product) with sub-linear query complexity. This sidesteps the curse of dimensionality: the tree-based indexes that make exact search fast in low dimensions degrade toward full scans once embeddings reach hundreds of dimensions, leaving brute-force comparison as the only exact option. Vector databases are essential for retrieval-augmented generation (RAG), semantic search, and recommendation systems, where proximity in embedding space corresponds to semantic or behavioral similarity. The Milvus architecture demonstrates how distributed vector databases manage indexing, storage, and query execution at scale. Students must evaluate when vector databases are necessary versus when traditional databases with full-text search or simpler similarity measures suffice.
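
To make that cost concrete, the sketch below (a toy example with random vectors; sizes are arbitrary) performs exact nearest-neighbor search the only way it can be done without an index: a full scan that is O(n·d) per query. ANN indexes exist precisely to avoid this scan, at the price of approximate results.

```python
import numpy as np

# Toy corpus: 100k stored embeddings of dimension 768 (a common size for
# sentence-embedding models). Random values stand in for real embeddings.
rng = np.random.default_rng(0)
corpus = rng.standard_normal((100_000, 768)).astype(np.float32)
query = rng.standard_normal(768).astype(np.float32)

# Exact search must compare the query against every stored vector:
# O(n * d) work per query, with no early termination possible.
dists = np.linalg.norm(corpus - query, axis=1)  # L2 distance to every row
top_k = np.argsort(dists)[:5]                   # ids of the 5 nearest vectors
print(top_k, dists[top_k])
```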

Student Presentation Assignments

Student 1: Vector Database Fundamentals

Required Coverage:

  • Must explain embeddings and vector similarity search, specifying the mathematical foundation and how semantic meaning maps to vector space
  • Must analyze why traditional databases fail for semantic search, demonstrating with query examples why exact-match indexes are insufficient
  • Must compare distance metrics (cosine similarity, L2, inner product), explaining when each is appropriate and how the choice affects results (a short metric sketch follows this list)
  • Must justify why approximate nearest neighbor (ANN) is necessary, analyzing computational complexity of exact search in high dimensions
  • Must evaluate at least two AI-driven use cases (e.g., RAG, recommendation systems), explaining why vector similarity is necessary versus alternatives
  • Must discuss trade-offs: precision vs recall in ANN search, latency vs accuracy, and when approximate results are acceptable
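
As a reference point for the metric comparison above, here is a minimal NumPy sketch (toy vectors with hypothetical values) of the three metrics. It also demonstrates the standard identity that, on unit-normalized vectors, all three metrics induce the same nearest-neighbor ranking, which is why many pipelines normalize embeddings once and use an inner-product index.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.5])

ip = a @ b                                           # inner product: magnitude-sensitive
cos = ip / (np.linalg.norm(a) * np.linalg.norm(b))   # cosine: angle only
l2 = np.linalg.norm(a - b)                           # L2 distance: smaller = more similar
print(f"inner product={ip:.3f}  cosine={cos:.3f}  L2={l2:.3f}")

# After unit normalization, ||a - b||^2 == 2 - 2*cos(a, b), so L2, cosine,
# and inner product all produce the same ranking of neighbors.
a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
print(np.isclose(np.linalg.norm(a_n - b_n) ** 2, 2 - 2 * (a_n @ b_n)))
```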

Student 2: Milvus Architecture

Required Coverage:

  • Must explain Milvus components (QueryNode, DataNode, IndexNode), specifying responsibilities and how they coordinate for queries
  • Must analyze storage layers: how object storage and metadata management differ from traditional database storage
  • Must explain at least two indexing methods (IVF, HNSW, or PQ), comparing their trade-offs in build time, query latency, and recall (a FAISS sketch contrasting IVF and HNSW follows this list)
  • Must explain scalability and distributed execution model, identifying bottlenecks and how Milvus handles horizontal scaling
  • Must compare Milvus with at least two alternatives (Pinecone, Weaviate, or FAISS), analyzing architectural differences and when each is preferable
  • Must evaluate architectural trade-offs: managed vs self-hosted, consistency guarantees, and operational complexity
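
Because FAISS is one of the alternatives named above, the following sketch (random data; parameter values chosen only for illustration, not tuned) contrasts two of the index families: IVF, which clusters the corpus and probes a few clusters per query, and HNSW, which builds a layered proximity graph. It illustrates the trade-off space rather than a production configuration.

```python
import faiss
import numpy as np

d = 128
rng = np.random.default_rng(0)
xb = rng.standard_normal((50_000, d)).astype(np.float32)  # base vectors
xq = rng.standard_normal((10, d)).astype(np.float32)      # queries

# IVF_FLAT: k-means partitions vectors into nlist buckets; a query scans
# only the nprobe nearest buckets. Cheap to build; recall depends on nprobe.
quantizer = faiss.IndexFlatL2(d)
ivf = faiss.IndexIVFFlat(quantizer, d, 256)  # nlist = 256
ivf.train(xb)                                # k-means training pass
ivf.add(xb)
ivf.nprobe = 16                              # raise for recall, lower for latency

# HNSW: slower, memory-heavier build (but no training step); typically
# high recall at low query latency.
hnsw = faiss.IndexHNSWFlat(d, 32)            # M = 32 graph links per node
hnsw.hnsw.efConstruction = 200               # build-time quality knob
hnsw.add(xb)
hnsw.hnsw.efSearch = 64                      # query-time recall/latency knob

D_ivf, I_ivf = ivf.search(xq, 10)            # distances and ids, top-10
D_hnsw, I_hnsw = hnsw.search(xq, 10)
```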

Student 3: Using Milvus in Applications

Required Coverage:

  • Must explain data ingestion and schema design for vector data, specifying how to structure collections and handle metadata
  • Must analyze index creation and tuning, explaining how parameter selection (e.g., HNSW M, ef_construction) affects build time and query performance
  • Must compare query patterns for AI applications, analyzing when batch processing is preferable to real-time queries
  • Must justify index type selection based on latency vs recall trade-offs, providing quantitative guidance where available
  • Must explain integration patterns with LLM pipelines (e.g., RAG architectures), specifying how vector search fits into the data flow and analyzing system design trade-offs rather than API usage (a minimal pipeline sketch follows this list)
  • Must identify at least three common patterns and anti-patterns, explaining why certain approaches work or fail
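
To anchor the RAG discussion, here is a minimal pipeline sketch assuming a locally running Milvus instance accessed through pymilvus's MilvusClient; embed() and generate() are hypothetical placeholders for an embedding model and an LLM, and the "docs" collection name is assumed. The point is the data flow (embed, retrieve, then ground the prompt), not a production implementation.

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")  # assumed local deployment

def embed(text: str) -> list[float]:
    # Placeholder: call the same embedding model used at ingestion time.
    raise NotImplementedError

def generate(prompt: str) -> str:
    # Placeholder: call whatever LLM the pipeline uses.
    raise NotImplementedError

def answer(question: str) -> str:
    query_vec = embed(question)          # 1. embed the user question
    hits = client.search(                # 2. retrieve the most similar chunks
        collection_name="docs",          #    assumed collection name
        data=[query_vec],
        limit=5,
        output_fields=["text"],          #    chunk text stored as metadata
    )
    context = "\n\n".join(h["entity"]["text"] for h in hits[0])
    return generate(                     # 3. ground the LLM in retrieved context
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```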

Student 4: Production Challenges & Best Practices

Required Coverage:

  • Must analyze data freshness and re-indexing strategies, explaining trade-offs between incremental updates and full rebuilds
  • Must evaluate cost and memory considerations, quantifying resource requirements for different index types and data volumes
  • Must explain security and multi-tenancy in vector databases, comparing isolation strategies and their limitations
  • Must analyze monitoring and failure handling, specifying key metrics and how to detect and recover from failures (a recall-monitoring sketch follows this list)
  • Must evaluate at least one real-world case study, explaining production challenges encountered and solutions implemented
  • Must identify scenarios where vector databases are not appropriate, justifying when simpler alternatives suffice
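
One of the most useful metrics to track in production is ANN recall measured against an exact brute-force baseline on a sampled query set: a sustained drop is a common symptom of a stale or mis-tuned index. The sketch below uses toy ids; in practice the ANN results would come from the live index and the baseline from an offline exact scan.

```python
import numpy as np

def recall_at_k(ann_ids: np.ndarray, exact_ids: np.ndarray) -> float:
    """Fraction of exact top-k neighbors that the ANN index also returned.

    Both arguments are (num_queries, k) arrays of neighbor ids.
    """
    hits = sum(len(set(a) & set(e)) for a, e in zip(ann_ids, exact_ids))
    return hits / exact_ids.size

# Pretend exact top-10 ids for 5 queries, plus simulated ANN output that
# misses 3 neighbors on the first query.
rng = np.random.default_rng(0)
exact = rng.permutation(10_000)[:50].reshape(5, 10)
ann = exact.copy()
ann[0, :3] = [20_000, 20_001, 20_002]   # ids the ANN index got wrong
print(recall_at_k(ann, exact))          # 47/50 = 0.94
```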

Presentation Requirements

All presentations must be 17–20 minutes in duration.

Note: Presentations that only summarize definitions, list features, or copy diagrams without interpretation will receive low marks. Each presentation must demonstrate analytical reasoning through comparisons, trade-off analysis, and justification of design decisions. Reading slides verbatim or presenting material that could be satisfied by reading documentation will be penalized.

Report Requirement: In addition to the presentation, each student must submit an individual PDF report. See Seminar Report Requirements for format, content, and submission details.

Evaluation Criteria

  • Technical Correctness (30%): accuracy of technical content, correct use of terminology, absence of errors
  • Depth of Understanding (25%): goes beyond surface-level definitions, demonstrates system-level comprehension
  • Clarity and Structure (20%): logical flow, clear explanations, appropriate use of examples and visuals
  • Use of Examples and Trade-offs (15%): concrete examples, discussion of limitations, comparison with alternatives
  • Slide Quality and Time Management (10%): professional formatting, appropriate pacing, stays within the time limit
