Tutorial · PostgreSQL

PostgreSQL Extensions: Supercharge Your Database

By Tech Team · Engineering · February 6, 2026 · 18 min read

One of the reasons PostgreSQL has become the go-to database for so many teams is its extensibility. Unlike databases that force you to work within rigid boundaries, Postgres lets you bolt on new capabilities through extensions. Need to store geographic data? There's an extension for that. Want to run machine learning models directly in your database? Extension. Full-text search with fuzzy matching? You guessed it.

Let's explore what extensions are, how to use them, and which ones you should know about.

What Are PostgreSQL Extensions?

An extension is a packaged bundle of SQL objects—functions, data types, operators, index types—that adds new capabilities to your PostgreSQL database. Think of them as plugins that extend what Postgres can do without modifying its core.

Extensions can add:

  • New data types (like geometric shapes, vectors, or JSON with better operators)
  • Functions (cryptographic hashing, statistical analysis, string manipulation)
  • Index methods (for specialized searches like full-text or nearest-neighbor)
  • Foreign data wrappers (to query external data sources as if they were local tables)
  • Background workers (for scheduled jobs or continuous processing)

The beauty of extensions is that they're first-class citizens in PostgreSQL. They integrate seamlessly with the query planner, ACID transactions, and all the reliability guarantees you expect.

Installing Extensions

Step 1: Check What's Available

Before you can use an extension, it needs to be installed on your PostgreSQL server. Most common extensions come bundled with standard PostgreSQL distributions. To see what's available:

SELECT * FROM pg_available_extensions ORDER BY name;

This shows all extensions your server can load. You'll typically see dozens of options, from btree_gist to uuid-ossp.

Step 2: Create the Extension

Once you've confirmed an extension is available, enable it in your database:

CREATE EXTENSION IF NOT EXISTS extension_name;

The IF NOT EXISTS clause prevents errors if the extension is already installed — handy for migration scripts.

Some extensions require superuser privileges, while others can be installed by database owners. If you get a permission error, you may need to ask your DBA or use a managed database's extension panel.
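Since PostgreSQL 13, extensions marked as "trusted" can be created by any user with CREATE privilege on the database, no superuser required. Before escalating to a DBA, you can check whether the extension you want is trusted:

```sql
-- The trusted column exists in PostgreSQL 13 and later
SELECT name, version, trusted
FROM pg_available_extension_versions
WHERE name = 'pg_trgm';
```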

Step 3: Verify Installation

Check that your extension is active:

SELECT * FROM pg_extension WHERE extname = 'extension_name';

Or list all installed extensions from the psql command line:

\dx

Installing Extensions Not Bundled with PostgreSQL

Some powerful extensions (like PostGIS or pgvector) aren't included in the default PostgreSQL package. You'll need to install them at the system level first.

On Ubuntu/Debian:

sudo apt install postgresql-16-postgis-3
sudo apt install postgresql-16-pgvector

On macOS with Homebrew:

brew install postgis
brew install pgvector

On managed databases (AWS RDS, Supabase, Neon, etc.), check the provider's documentation. Most popular extensions are pre-installed—you just need to run CREATE EXTENSION.

Version compatibility matters. Extensions are compiled against specific PostgreSQL versions. When upgrading PostgreSQL, make sure compatible versions of your extensions exist for the new version before migrating.
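To see where you stand before migrating, compare each installed extension's version against the newest version your server has packaged:

```sql
-- Installed vs. newest packaged version for each extension
SELECT e.extname,
       e.extversion      AS installed,
       a.default_version AS available
FROM pg_extension e
JOIN pg_available_extensions a ON a.name = e.extname;

-- After upgrading the system packages, update the SQL objects too
ALTER EXTENSION postgis UPDATE;
```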

Let's dive into the extensions that developers and DBAs reach for most often. These aren't obscure tools—they're battle-tested in production systems worldwide.

PostGIS — Geospatial Powerhouse

If you're working with location data, PostGIS is essential. It transforms PostgreSQL into a fully-featured geographic information system (GIS) that rivals specialized spatial databases.

What it adds:

  • Geometry and geography data types
  • Spatial indexing (R-tree via GiST)
  • Hundreds of spatial functions
  • Support for standard formats (WKT, WKB, GeoJSON)

Example: Find all restaurants within 1km of a point

CREATE EXTENSION IF NOT EXISTS postgis;

-- Create a table with a geography column
CREATE TABLE restaurants (
    id SERIAL PRIMARY KEY,
    name TEXT,
    location GEOGRAPHY(POINT, 4326)
);

-- Insert some data (longitude, latitude)
INSERT INTO restaurants (name, location) VALUES
    ('Pizza Place', ST_Point(-122.4194, 37.7749)),
    ('Sushi Spot', ST_Point(-122.4089, 37.7851)),
    ('Taco Shop', ST_Point(-122.4000, 37.7900));

-- Find restaurants within 1km of Union Square
SELECT name, ST_Distance(
    location,
    ST_Point(-122.4075, 37.7881)::geography
) AS distance_meters
FROM restaurants
WHERE ST_DWithin(
    location,
    ST_Point(-122.4075, 37.7881)::geography,
    1000  -- 1000 meters
)
ORDER BY distance_meters;

PostGIS also handles complex operations like polygon intersections, route calculations, and coordinate system transformations.
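As a quick taste of those operations, here's a minimal sketch (the polygon coordinates are invented for illustration):

```sql
-- Point-in-polygon test: is this point inside a delivery zone?
SELECT ST_Contains(
    ST_GeomFromText(
        'POLYGON((-122.42 37.77, -122.40 37.77, -122.40 37.79,
                  -122.42 37.79, -122.42 37.77))', 4326),
    ST_SetSRID(ST_Point(-122.41, 37.78), 4326)
);  -- true: the point lies inside the square

-- Coordinate transformation: WGS 84 (lon/lat) to Web Mercator (meters)
SELECT ST_AsText(
    ST_Transform(ST_SetSRID(ST_Point(-122.41, 37.78), 4326), 3857)
);
```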


pgvector — Vector Similarity Search

With the rise of AI applications, pgvector has become one of the fastest-growing extensions. It lets you store and query vector embeddings directly in PostgreSQL—no separate vector database needed.

What it adds:

  • vector data type for storing embeddings
  • Distance operators (L2, inner product, cosine)
  • Approximate nearest neighbor (ANN) indexes

Example: Semantic search with embeddings

CREATE EXTENSION IF NOT EXISTS vector;

-- Store documents with their embeddings
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    content TEXT,
    embedding vector(1536)  -- OpenAI ada-002 dimension
);

-- Create an index for fast similarity search
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);

-- Find the 5 most similar documents to a query embedding
SELECT content, embedding <=> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY embedding <=> '[0.1, 0.2, ...]'::vector
LIMIT 5;

The <=> operator calculates cosine distance. pgvector also supports <-> for L2 distance and <#> for inner product.

Why use pgvector over a dedicated vector database? Simplicity. Your vectors live alongside your relational data, so you can join them with user tables, filter by metadata, and wrap everything in transactions. One less service to manage.
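For example, you can filter by relational metadata and rank by similarity in a single statement. This sketch uses the documents table from above plus a hypothetical workspace_docs join table:

```sql
-- Nearest documents, restricted to one workspace's content
SELECT d.content
FROM documents d
JOIN workspace_docs w ON w.document_id = d.id  -- hypothetical join table
WHERE w.workspace_id = 42
ORDER BY d.embedding <=> '[0.1, 0.2, ...]'::vector
LIMIT 5;
```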


pg_stat_statements — Query Performance Insights

Every production PostgreSQL database should have this extension enabled. It tracks statistics about every SQL statement executed, helping you find slow queries and optimization opportunities.

What it tracks:

  • Execution count and total/average time
  • Rows returned or affected
  • Buffer usage (shared, local, temp)
  • WAL generation

Example: Find your slowest queries

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by total execution time
SELECT
    substring(query, 1, 80) AS query_preview,
    calls,
    round(total_exec_time::numeric, 2) AS total_ms,
    round(mean_exec_time::numeric, 2) AS avg_ms,
    rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Example: Find queries that scan the most data

SELECT
    substring(query, 1, 80) AS query_preview,
    calls,
    shared_blks_read + shared_blks_hit AS total_blocks,
    round(100.0 * shared_blks_hit /
        NULLIF(shared_blks_read + shared_blks_hit, 0), 2) AS cache_hit_pct
FROM pg_stat_statements
WHERE calls > 100
ORDER BY shared_blks_read DESC
LIMIT 10;

Queries with low cache hit percentages are reading from disk frequently—prime candidates for index optimization.
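Statistics accumulate until explicitly cleared, so before and after a tuning change it's often useful to start a clean measurement window:

```sql
-- Discard accumulated statistics and start measuring fresh
SELECT pg_stat_statements_reset();
```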


pgcrypto — Encryption and Hashing

When you need to hash passwords, encrypt sensitive data, or generate secure random values, pgcrypto has you covered.

Key functions:

  • crypt() and gen_salt() for password hashing
  • encrypt() and decrypt() for symmetric encryption
  • pgp_sym_encrypt() for PGP encryption
  • gen_random_bytes() and gen_random_uuid() for secure randomness

Example: Secure password storage

CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Hash a password with bcrypt
INSERT INTO users (email, password_hash)
VALUES (
    'user@example.com',
    crypt('user_password', gen_salt('bf', 10))
);

-- Verify a password
SELECT id, email
FROM users
WHERE email = 'user@example.com'
  AND password_hash = crypt('attempted_password', password_hash);

The bf algorithm is bcrypt with a cost factor of 10 (2^10 iterations). Increase the cost factor for more security at the expense of CPU time.

Don't store encryption keys in the database. If someone gets access to your database, they shouldn't also get the keys to decrypt sensitive data. Use environment variables or a secrets manager for keys.
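A sketch of that pattern: the application passes the key in at query time, so only ciphertext ever touches disk. The payment_methods table and token value are hypothetical:

```sql
-- Encrypt with a key supplied by the application, never stored in the DB
INSERT INTO payment_methods (user_id, card_token)
VALUES (1, pgp_sym_encrypt('tok_abc123', 'key-from-secrets-manager'));

-- Decrypt with the same externally held key (card_token is bytea)
SELECT pgp_sym_decrypt(card_token, 'key-from-secrets-manager')
FROM payment_methods
WHERE user_id = 1;
```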


postgres_fdw — Query Remote Databases

Foreign Data Wrappers (FDW) let you access external data sources as if they were local tables. postgres_fdw specifically connects to other PostgreSQL databases.

Example: Query a remote database

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Define the remote server
CREATE SERVER remote_db
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'remote.example.com', port '5432', dbname 'analytics');

-- Map your local user to a remote user
CREATE USER MAPPING FOR current_user
SERVER remote_db
OPTIONS (user 'analyst', password 'secret');

-- Import tables from the remote schema
IMPORT FOREIGN SCHEMA public
FROM SERVER remote_db
INTO remote_tables;

-- Now query as if local!
SELECT * FROM remote_tables.events
WHERE created_at > NOW() - INTERVAL '1 day';

You can even join local and remote tables in a single query. The query planner is smart enough to push filters down to the remote server when possible.
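You can verify the push-down with EXPLAIN VERBOSE, which shows the SQL actually sent to the remote server:

```sql
EXPLAIN (VERBOSE)
SELECT * FROM remote_tables.events
WHERE created_at > NOW() - INTERVAL '1 day';
-- The plan's "Remote SQL:" line should include the WHERE clause,
-- meaning the filter runs remotely rather than after transfer.
```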


uuid-ossp — UUID Generation

UUIDs are everywhere in modern applications. This extension provides functions to generate various UUID versions.

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Generate a random UUID (v4)
SELECT uuid_generate_v4();
-- Example output: 550e8400-e29b-41d4-a716-446655440000

-- Use as a default for primary keys
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    customer_id UUID,
    total DECIMAL(10, 2)
);

PostgreSQL 13+ has gen_random_uuid() built-in without any extension. If you only need random UUIDs (v4), you don't need uuid-ossp anymore. The extension is still useful for other UUID versions (v1, v3, v5).
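So on a modern server, the orders table above could drop the extension entirely:

```sql
-- PostgreSQL 13+: random UUID defaults with no extension required
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id UUID,
    total DECIMAL(10, 2)
);
```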


pg_trgm — Fuzzy Text Matching

Need to find "PostgreSQL" when someone types "postgre" or "postgressql"? The trigram extension enables fuzzy string matching and similarity searches.

What it adds:

  • Similarity functions (similarity(), word_similarity())
  • Similarity operators (%, <%, %>)
  • GIN and GiST index support for fast fuzzy searches

Example: Autocomplete with typo tolerance

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Create an index for fast similarity searches
CREATE INDEX trgm_idx ON products USING gin (name gin_trgm_ops);

-- Find products similar to a search term
SELECT name, similarity(name, 'choclate') AS sim
FROM products
WHERE name % 'choclate'  -- % is true above the similarity threshold (default 0.3)
ORDER BY sim DESC
LIMIT 10;

This would find "Chocolate Bar", "Chocolate Cake", etc., even with the misspelling.
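The threshold behind the % operator is adjustable per session if 0.3 proves too loose or too strict for your data:

```sql
-- Raise the bar for what counts as 'similar' (default is 0.3)
SET pg_trgm.similarity_threshold = 0.4;

-- Inspect raw similarity scores to pick a sensible threshold
SELECT similarity('choclate', 'Chocolate Bar') AS sim;
```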


hstore — Key-Value Storage

Before JSONB became popular, hstore was the go-to for storing semi-structured data. It's still useful for simple key-value pairs where you don't need nested structures.

CREATE EXTENSION IF NOT EXISTS hstore;

-- Store attributes as key-value pairs
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name TEXT,
    attributes hstore
);

INSERT INTO products (name, attributes) VALUES
    ('T-Shirt', 'color => "red", size => "M", material => "cotton"'),
    ('Laptop', 'brand => "Dell", ram => "16GB", storage => "512GB"');

-- Query by key
SELECT name FROM products WHERE attributes -> 'color' = 'red';

-- Check if key exists
SELECT name FROM products WHERE attributes ? 'brand';
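Two more hstore operations worth knowing: updating keys with the || operator, and casting to jsonb if you later outgrow flat key-value pairs:

```sql
-- Add or overwrite a key
UPDATE products
SET attributes = attributes || 'color => "blue"'::hstore
WHERE name = 'T-Shirt';

-- Escape hatch: hstore casts cleanly to jsonb
SELECT name, attributes::jsonb FROM products;
```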

pgAudit — Compliance-Ready Audit Logging

For regulated industries (finance, healthcare, government), pgAudit provides detailed session and object-level audit logging that satisfies compliance requirements.

-- pgaudit must first be preloaded via postgresql.conf (restart required):
-- shared_preload_libraries = 'pgaudit'
-- pgaudit.log = 'read, write, ddl'
-- pgaudit.log_relation = on

CREATE EXTENSION IF NOT EXISTS pgaudit;

-- Now every SELECT, INSERT, UPDATE, DELETE, and DDL statement
-- gets logged with details about who ran it and what was accessed

The logs include:

  • Statement type (SELECT, INSERT, etc.)
  • Object accessed (table, view, function)
  • User and session info
  • Timestamp and duration

pg_cron — In-Database Job Scheduling

Need to run maintenance tasks, refresh materialized views, or clean up old data on a schedule? pg_cron lets you schedule jobs directly in PostgreSQL.

-- Requires shared_preload_libraries = 'pg_cron' in postgresql.conf (restart needed)
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Vacuum every night at 3 AM
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM ANALYZE');

-- Refresh a materialized view every hour
SELECT cron.schedule(
    'refresh-stats',
    '0 * * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY hourly_stats'
);

-- Delete old logs weekly
SELECT cron.schedule(
    'cleanup-logs',
    '0 4 * * 0',
    $$DELETE FROM logs WHERE created_at < NOW() - INTERVAL '90 days'$$
);

-- View scheduled jobs
SELECT * FROM cron.job;

-- Unschedule a job
SELECT cron.unschedule('nightly-vacuum');

pg_cron uses standard cron syntax: minute, hour, day of month, month, day of week. Use crontab.guru to help build expressions.


pg_repack — Online Table Maintenance

Tables get bloated over time as rows are updated and deleted. pg_repack reorganizes tables and indexes to reclaim space—and unlike VACUUM FULL, it doesn't lock the table for the entire operation.

# Repack a specific table
pg_repack -d mydb -t bloated_table

# Repack all tables in a database
pg_repack -d mydb

# Repack only indexes
pg_repack -d mydb -t mytable --only-indexes

This is especially valuable for high-traffic tables where you can't afford downtime for maintenance.


Comprehensive Extension Reference

Here's a quick reference of notable PostgreSQL extensions by category:

Geospatial & Location

| Extension | Description | Use Case |
| --- | --- | --- |
| PostGIS | Full-featured GIS functionality | Maps, location queries, spatial analysis |
| PostGIS Topology | Topological data management | Network analysis, routing |
| pgRouting | Routing algorithms for PostGIS | Navigation, shortest path calculations |
| H3 | Uber's hexagonal hierarchical geospatial indexing | Efficient geospatial aggregation |

AI & Machine Learning

| Extension | Description | Use Case |
| --- | --- | --- |
| pgvector | Vector similarity search | Embeddings, semantic search, RAG |
| PostgresML | In-database ML training and inference | ML pipelines without data movement |
| pgai | AI toolkit for Postgres | LLM integration, embedding generation |
| pg_embedding | Alternative vector extension | Similarity search (deprecated in favor of pgvector) |

Performance & Monitoring

| Extension | Description | Use Case |
| --- | --- | --- |
| pg_stat_statements | Query statistics | Performance tuning, slow query identification |
| auto_explain | Automatic query plan logging | Debugging slow queries in production |
| pg_stat_kcache | OS-level cache statistics | Deep performance analysis |
| pg_wait_sampling | Wait event sampling | Lock and I/O wait analysis |
| pg_hint_plan | Execution plan hints | Force specific query plans |
| hypopg | Hypothetical indexes | Test index effectiveness without creating them |

Data Types & Storage

| Extension | Description | Use Case |
| --- | --- | --- |
| hstore | Key-value pairs | Simple semi-structured data |
| ltree | Hierarchical labels | Tree structures, taxonomies |
| citext | Case-insensitive text | Email addresses, usernames |
| cube | Multidimensional cubes | Scientific data, range queries |
| ip4r | IPv4/IPv6 range types | Network management, CIDR blocks |
| pg_uuidv7 | UUIDv7 generation | Time-sortable unique IDs |

Text Search & Fuzzy Matching

| Extension | Description | Use Case |
| --- | --- | --- |
| pg_trgm | Trigram matching | Fuzzy search, autocomplete |
| unaccent | Remove accents from text | Accent-insensitive search |
| pg_bigm | 2-gram full-text search | CJK language support |
| fuzzystrmatch | String similarity functions | Soundex, Levenshtein distance |
| dict_xsyn | Extended synonym dictionary | Search expansion |

Security & Encryption

| Extension | Description | Use Case |
| --- | --- | --- |
| pgcrypto | Cryptographic functions | Password hashing, encryption |
| pgAudit | Audit logging | Compliance, security monitoring |
| pg_permissions | Permission reporting | Security audits |
| sslinfo | SSL connection info | Security verification |

Foreign Data Wrappers

| Extension | Description | Use Case |
| --- | --- | --- |
| postgres_fdw | Connect to other PostgreSQL | Database federation |
| mysql_fdw | Connect to MySQL | Cross-database queries |
| mongo_fdw | Connect to MongoDB | Document store integration |
| redis_fdw | Connect to Redis | Cache integration |
| file_fdw | Query CSV/text files | ETL, log analysis |
| oracle_fdw | Connect to Oracle | Legacy system integration |

Utilities & Operations

| Extension | Description | Use Case |
| --- | --- | --- |
| uuid-ossp | UUID generation | Primary keys, identifiers |
| pg_cron | Job scheduling | Maintenance tasks, periodic jobs |
| pg_repack | Online table reorganization | Reclaim space without locks |
| pgstattuple | Tuple-level statistics | Bloat analysis |
| pg_prewarm | Buffer cache prewarming | Faster restarts |
| pg_partman | Partition management | Time-series data, archival |

Indexing & Query Optimization

| Extension | Description | Use Case |
| --- | --- | --- |
| btree_gist | B-tree operators for GiST | Exclusion constraints |
| btree_gin | B-tree operators for GIN | Multi-column text search |
| bloom | Bloom filter indexes | Many-column existence tests |
| rum | Advanced GIN indexes | Better full-text search ranking |

Distributed & Scaling

| Extension | Description | Use Case |
| --- | --- | --- |
| Citus | Distributed PostgreSQL | Horizontal scaling, sharding |
| pg_shard | Table sharding (legacy) | Scale-out writes |
| plproxy | Function-based sharding | Distributed function calls |

Graph & Advanced Data

| Extension | Description | Use Case |
| --- | --- | --- |
| AGE | Graph database (Cypher queries) | Graph analytics, social networks |
| pgroonga | Fast full-text search engine | Japanese/multilingual search |
| temporal_tables | System-versioned tables | Historical data, auditing |
| timescaledb | Time-series database | IoT, metrics, monitoring |

Wrapping Up

PostgreSQL extensions turn an already excellent database into something extraordinary. Whether you need geospatial queries with PostGIS, AI-powered search with pgvector, or just better visibility into query performance with pg_stat_statements, there's likely an extension that does exactly what you need.

The PostgreSQL extension ecosystem is mature and battle-tested. These aren't experimental toys—they're powering production systems at companies of all sizes. And because they integrate with PostgreSQL's core features like transactions, replication, and the query planner, you get reliability along with new capabilities.

Start with the basics: enable pg_stat_statements on every database (seriously, you'll thank yourself later). Then add extensions as your needs grow. That's the PostgreSQL way—a solid foundation you can build on endlessly.
