
What is Neon?

Neon is Serverless Postgres built for the cloud. Neon separates compute and storage to offer modern developer features such as autoscaling, database branching, scale-to-zero, and more.

Neon supports vector search using the pgvector open-source PostgreSQL extension, which enables you to use Postgres as a vector database for storing and querying embeddings.
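As a minimal sketch of what this looks like in practice, the snippet below enables the pgvector extension and creates a table with a vector column sized for OpenAI embeddings. The table name, the `NEON_DATABASE_URL` environment variable, and the use of psycopg2 are illustrative assumptions, not prescribed by this README.

```python
# Sketch: enable pgvector and create an embeddings table on a Neon database.
# Assumes psycopg2 is installed and NEON_DATABASE_URL holds your Neon
# connection string (both assumptions for illustration).
import os

CREATE_EXTENSION_SQL = "CREATE EXTENSION IF NOT EXISTS vector;"

# OpenAI's text-embedding-ada-002 model produces 1536-dimensional vectors,
# so the vector column is declared with that dimension.
CREATE_TABLE_SQL = """
CREATE TABLE IF NOT EXISTS documents (
    id SERIAL PRIMARY KEY,
    content TEXT NOT NULL,
    embedding VECTOR(1536)
);
"""

def setup_schema():
    # Imported lazily so the module loads even without psycopg2 installed.
    import psycopg2
    conn = psycopg2.connect(os.environ["NEON_DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute(CREATE_EXTENSION_SQL)
        cur.execute(CREATE_TABLE_SQL)
    conn.close()
```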

OpenAI cookbook notebook

Check out the notebook in this repo for working with Neon Serverless Postgres as your vector database.

Semantic search using Neon Postgres with pgvector and OpenAI

In this notebook you will learn how to:

  1. Use embeddings created by the OpenAI API
  2. Store embeddings in a Neon Serverless Postgres database
  3. Convert a raw text query to an embedding with the OpenAI API
  4. Use Neon with the pgvector extension to perform vector similarity search
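The four steps above can be sketched as follows. The table and column names (`documents`, `content`, `embedding`), the embedding model, and the cosine-distance operator choice are assumptions for illustration; the notebook itself is the authoritative walkthrough.

```python
# Hedged sketch of the four notebook steps. Table/column names and the
# model name are assumptions, not taken from this README.

def embed(text: str) -> list[float]:
    """Steps 1 and 3: create an embedding with the OpenAI API."""
    from openai import OpenAI  # lazy import; requires OPENAI_API_KEY in env
    client = OpenAI()
    resp = client.embeddings.create(
        model="text-embedding-ada-002", input=text
    )
    return resp.data[0].embedding

def store(conn, content: str) -> None:
    """Step 2: store the text and its embedding in Neon Postgres."""
    with conn.cursor() as cur:
        # str() of a float list yields "[0.1, 0.2, ...]", which pgvector
        # can parse when cast with ::vector.
        cur.execute(
            "INSERT INTO documents (content, embedding) "
            "VALUES (%s, %s::vector)",
            (content, str(embed(content))),
        )
    conn.commit()

def search(conn, query: str, limit: int = 5):
    """Step 4: embed the query, then rank rows by cosine distance
    using pgvector's <=> operator."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT content, embedding <=> %s::vector AS distance "
            "FROM documents ORDER BY distance LIMIT %s",
            (str(embed(query)), limit),
        )
        return cur.fetchall()
```

pgvector also offers `<->` (L2 distance) and `<#>` (negative inner product) if cosine distance is not the right fit for your embeddings.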

Scaling Support

Neon enables you to scale your AI applications with the following features:

  • Autoscaling: If your AI application experiences variable load, such as heavy traffic during certain hours of the day, Neon can automatically scale compute resources without manual intervention. During periods of inactivity, Neon is able to scale to zero.
  • Instant read replicas: Neon supports instant read replicas, which are independent read-only compute instances designed to perform read operations on the same data as your read-write computes. With read replicas, you can offload reads from your read-write compute instance to a dedicated read-only compute instance for your AI application.
  • The Neon serverless driver: Neon supports a low-latency serverless PostgreSQL driver for JavaScript and TypeScript applications that allows you to query data from serverless and edge environments, making it possible to achieve sub-10ms queries.

More Examples

Additional Resources