Database Optimization: Indexing, Queries, and Performance
Last updated: October 2025 • Reviewed by database architects
Your database is slow. Pages take forever to load. Users are complaining. Sound familiar? I've been there. The good news? Most database performance problems have simple solutions. You don't need to be a DBA wizard - you just need to understand a few key concepts.
I've seen databases go from 10 second queries to 10 milliseconds with one index. I've watched apps handle 10x more traffic after query optimization. This guide covers the techniques that actually move the needle in production.
📇 Indexing: Your First Line of Defense
Think of an index like a book's table of contents. Without it, you'd have to read every page to find what you want. With it, you jump straight to the right page. Databases work the same way.
🐢 Without Index
-- Full table scan!
SELECT * FROM users
WHERE email = 'john@example.com';
-- Database reads ALL rows
-- 1,000,000 rows = 10 seconds

Scans every single row. Gets slower as the table grows.
⚡ With Index
-- Uses index!
CREATE INDEX idx_email ON users(email);
SELECT * FROM users
WHERE email = 'john@example.com';
-- Database jumps to the row
-- 1,000,000 rows = 10 milliseconds

Direct lookup. Consistent speed regardless of table size.
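The same effect can be sketched in plain JavaScript: a full table scan is a linear search over every row, while an index behaves like a hash map keyed on the indexed column. A toy in-memory illustration (not a real database, but the asymptotics are the same):

```javascript
// Toy illustration: full scan vs. indexed lookup (in-memory, not a real DB)
const users = [];
for (let i = 0; i < 100000; i++) {
  users.push({ id: i, email: `user${i}@example.com` });
}

// "Full table scan": check every row until a match is found - O(n)
function findByScan(email) {
  return users.find(u => u.email === email);
}

// "Index": a hash map from email -> row, built once - O(1) lookups
const emailIndex = new Map(users.map(u => [u.email, u]));
function findByIndex(email) {
  return emailIndex.get(email);
}

console.log(findByScan('user99999@example.com').id);  // 99999 (slow path)
console.log(findByIndex('user99999@example.com').id); // 99999 (fast path)
```

Like a real index, the Map costs extra memory and must be updated on every insert - the same trade-off described below.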
When to Add Indexes:
- Columns you filter on in WHERE clauses
- Columns used in JOIN conditions
- Columns you sort by (ORDER BY) on large tables
- Foreign key columns
⚠️ Index Gotchas:
- Every index slows down writes - INSERT, UPDATE, and DELETE must maintain it
- Indexes take disk space and memory
- Unused indexes are pure overhead - find them and drop them
Compound Indexes (The Secret Weapon):
When you filter by multiple columns, compound indexes are magic:
-- Query that uses multiple columns
SELECT * FROM orders
WHERE user_id = 123
AND status = 'pending'
AND created_at > '2025-01-01';
-- Create compound index (order matters!)
CREATE INDEX idx_orders_lookup
ON orders(user_id, status, created_at);
-- Most selective column first!
-- user_id (narrows to specific user)
-- status (narrows to pending orders)
-- created_at (final filter)
-- Note: this index also serves queries on user_id alone or on
-- (user_id, status), but not on status alone (leftmost-prefix rule)

🔍 Query Optimization: Write Smarter SQL
Even with perfect indexes, bad queries kill performance. Here are the patterns I see causing problems constantly:
❌ The SELECT * Problem:
-- Bad: Fetches ALL columns (wastes bandwidth)
SELECT * FROM users WHERE id = 1;
-- Returns: id, email, password, name, bio,
-- avatar, created_at, updated_at, settings...

-- Good: Only fetch what you need
SELECT id, name, email FROM users WHERE id = 1;
-- 10x faster, less memory, cleaner code

❌ The N+1 Query Problem:
This one is BRUTAL. It happens when you loop through results and query inside the loop:
// Bad: 101 queries! (1 + 100)
const users = await User.findAll(); // 1 query
for (const user of users) {
  const posts = await Post.findAll({
    where: { userId: user.id }
  }); // 100 queries!
}

// Good: 1 query with JOIN
const users = await User.findAll({
  include: [{ model: Post }]
});
// Or use IN clause
const userIds = users.map(u => u.id);
const posts = await Post.findAll({
  where: { userId: userIds } // Sequelize expands an array into IN (...)
});

❌ The LIKE '%pattern%' Problem:
-- Bad: Can't use index (leading wildcard)
SELECT * FROM products
WHERE name LIKE '%phone%';
-- Scans entire table!

-- Better: No leading wildcard
SELECT * FROM products
WHERE name LIKE 'phone%';
-- Can use index!
-- Best: Full-text search
CREATE FULLTEXT INDEX idx_name ON products(name);
SELECT * FROM products
WHERE MATCH(name) AGAINST('phone');
-- (MySQL syntax; PostgreSQL uses to_tsvector / to_tsquery instead)

📈 Scaling Strategies for Growth
Vertical Scaling
Bigger machine, more power
Read Replicas
Split reads across servers
Sharding
Split data across databases
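A common way to shard is to hash a shard key (such as user_id) and take it modulo the number of shards, so every row with the same key always lands on the same database. A minimal sketch - the shards array and shardFor helper are hypothetical stand-ins, not a real driver API:

```javascript
// Hypothetical shard routing: pick a database by hashing the shard key.
const SHARD_COUNT = 4;
const shards = ['db0', 'db1', 'db2', 'db3']; // stand-ins for connection handles

// Simple deterministic hash so the same key always maps to the same shard
function hashKey(key) {
  let h = 0;
  for (const ch of String(key)) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h;
}

function shardFor(userId) {
  return shards[hashKey(userId) % SHARD_COUNT];
}

console.log(shardFor(123)); // same user always routes to the same shard
```

One caveat: plain modulo remaps most keys when you add a shard; consistent hashing is the usual fix for that.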
Caching: The Performance Multiplier
The fastest database query is the one you don't make. Cache everything you can:
// Redis caching example (node-redis v4+)
const Redis = require('redis');
const client = Redis.createClient();
client.connect(); // v4+ requires an explicit connect before commands

async function getUser(userId) {
  // Check cache first
  const cached = await client.get(`user:${userId}`);
  if (cached) {
    return JSON.parse(cached); // Cache hit!
  }
  // Cache miss - query database
  const user = await db.query(
    'SELECT * FROM users WHERE id = ?',
    [userId]
  );
  // Store in cache (expire after 1 hour)
  await client.setEx(
    `user:${userId}`,
    3600,
    JSON.stringify(user)
  );
  return user;
}
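One thing the pattern above leaves out is invalidation: when a user row changes, the cached copy must be dropped or readers can see stale data for up to an hour. A sketch of the idea, using an in-memory Map as a stand-in for Redis and a hypothetical fetchUserFromDb loader:

```javascript
// Cache-aside with invalidation. The Map stands in for Redis, and
// fetchUserFromDb is a hypothetical loader standing in for the SQL query.
const cache = new Map();

async function fetchUserFromDb(id) {
  return { id, name: `user-${id}` }; // pretend this hit the database
}

async function getCachedUser(id) {
  if (cache.has(id)) return cache.get(id); // cache hit
  const user = await fetchUserFromDb(id);  // cache miss
  cache.set(id, user);
  return user;
}

async function updateUser(id, fields) {
  // 1. Write `fields` to the database (omitted in this sketch)
  // 2. Drop the cached copy so the next read refetches fresh data
  cache.delete(id);
}
```

With node-redis, the equivalent invalidation is client.del('user:' + userId) right after the database write.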
Result: repeated lookups come straight from Redis - roughly 100x faster than querying the database each time.

📊 Monitoring and Debugging
Essential Queries to Monitor:
-- PostgreSQL: Find slow queries (needs the pg_stat_statements extension;
-- on PostgreSQL 13+ the columns are total_exec_time / mean_exec_time)
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
-- MySQL: Enable slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
-- Check query execution plan
EXPLAIN ANALYZE
SELECT * FROM orders WHERE user_id = 123;

✅ Database Optimization Checklist
- Index the columns in your WHERE, JOIN, and ORDER BY clauses
- Replace SELECT * with explicit column lists
- Eliminate N+1 queries with JOINs or IN clauses
- Avoid leading-wildcard LIKE; use full-text search instead
- Cache frequently read, rarely changed data
- Monitor slow queries and check execution plans with EXPLAIN
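Server-side logs can be complemented with application-level timing: wrap every query in a helper that measures elapsed time and warns when it crosses a threshold. A minimal sketch - dbQuery is a stub standing in for a real driver call such as mysql2's or pg's query method:

```javascript
// Application-side slow-query logging: time each query, warn on slow ones.
const SLOW_MS = 1000; // same 1-second threshold as long_query_time above

// Stub standing in for a real driver call (e.g. mysql2 / pg)
async function dbQuery(sql, params) {
  return []; // pretend result set
}

async function runQuery(sql, params = []) {
  const start = Date.now();
  const rows = await dbQuery(sql, params);
  const elapsed = Date.now() - start;
  if (elapsed >= SLOW_MS) {
    console.warn(`Slow query (${elapsed}ms): ${sql}`);
  }
  return rows;
}
```

Logging at this layer catches time spent in the driver and network as well as in the database itself.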
🎯 The Bottom Line
Start with the basics: add indexes to your WHERE clauses, stop using SELECT *, and cache frequently accessed data. These three changes alone will solve 80% of database performance issues. As you grow, add read replicas and consider sharding. But honestly, most apps never need that level of complexity. Just index properly and write smart queries.
About the Authors
Written by DevMetrix's database team with experience optimizing databases handling millions of queries per second at scale. We've seen it all - from startups to enterprise systems.