Building a Fast and Compact SQLite Cache Store
Discover why bun:sqlite, combined with Msgpackr serialization, offers an efficient solution for building fast and compact cache stores.
When working on applications, caching is crucial for enhancing performance by reducing the need for repeated database fetches. Among the various SQLite libraries available, Bun’s native integration is optimized for performance and takes advantage of its non-blocking I/O capabilities.
The bun:sqlite module is roughly 3-6x faster than better-sqlite3 and 8-9x faster than deno.land/x/sqlite for read queries. Each driver was benchmarked against the Northwind Traders dataset. View and run the benchmark source. (Source)
In addition to efficient caching, serializing JavaScript objects can be slow when using JSON. This is why it makes sense to opt for efficient binary encoding alternatives like Msgpackr or CBOR. These formats are faster to parse, support complex data types, require less CPU usage, and store data more compactly, further enhancing overall application performance.
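For example, here is roughly what that serialization layer looks like with Msgpackr; a minimal sketch, and the sample object is made up:

import { pack, unpack } from 'msgpackr';

const value = { id: 42, tags: ['a', 'b'], createdAt: new Date() };

// pack() produces a compact binary Buffer, typically smaller than the JSON string
const encoded = pack(value);

// unpack() restores the original structure, including types JSON cannot
// represent natively, such as Date
const decoded = unpack(encoded);
console.log(decoded.createdAt instanceof Date); // true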
With this in mind, let’s explore how to build a cache manager using bun:sqlite along with efficient binary encoding.
A cache-manager Store
cache-manager provides a straightforward and intuitive API for caching, abstracting away the complexity of managing different cache stores and their configurations. With support for multiple stores, built-in expiration and TTL management, and robust error handling and fallback mechanisms, it ensures data integrity and freshness.
Additionally, cache-manager is highly customizable and extensible, allowing you to create custom cache stores tailored to your needs. This flexibility means you can set up and use caching with minimal code, so you can focus on your application’s core logic.
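As a quick illustration, this is roughly how a store like the one built in this post plugs into cache-manager. The factory name bunSqliteStore and its options are assumptions here, so check the package README for the exact API:

import { caching } from 'cache-manager';
import bunSqliteStore from 'cache-manager-bun-sqlite3'; // assumed default export

const cache = await caching(bunSqliteStore, {
  name: 'cache',          // assumed option: table name
  path: './cache.sqlite', // assumed option: database file
});

await cache.set('user:1', { name: 'Ada' }, 60_000); // ttl in milliseconds
const user = await cache.get('user:1');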
The required interface to fulfill as a cache-manager store is as follows:
export type Store = {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, data: T, ttl?: Milliseconds): Promise<void>;
  del(key: string): Promise<void>;
  reset(): Promise<void>;
  mset(arguments_: Array<[string, unknown]>, ttl?: Milliseconds): Promise<void>;
  mget(...arguments_: string[]): Promise<unknown[]>;
  mdel(...arguments_: string[]): Promise<void>;
  keys(pattern?: string): Promise<string[]>;
  ttl(key: string): Promise<number>;
};
See more here.
The Queries
Configuring SQLite for Optimal Performance
- PRAGMA main.synchronous = NORMAL: Ensures that SQLite writes are fast while still maintaining a reasonable level of data safety. It does not guarantee as much durability as FULL, but it is sufficient for many use cases.
- PRAGMA main.journal_mode = WAL2: Improves concurrency by allowing readers to access the database while a write operation is ongoing.
- PRAGMA main.auto_vacuum = INCREMENTAL: Allows SQLite to reclaim unused space incrementally, rather than all at once.
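These pragmas can be applied right after opening the database. A minimal sketch with bun:sqlite, where the file name is just an example:

import { Database } from 'bun:sqlite';

const db = new Database('cache.sqlite');

// apply the pragmas described above
db.exec('PRAGMA main.synchronous = NORMAL;');
db.exec('PRAGMA main.journal_mode = WAL2;');
db.exec('PRAGMA main.auto_vacuum = INCREMENTAL;');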
1. Creating the Cache Table
CREATE TABLE IF NOT EXISTS {table} (
  key TEXT PRIMARY KEY,
  val BLOB,
  created_at INTEGER,
  expire_at INTEGER
);
CREATE INDEX IF NOT EXISTS index_expire_{table} ON {table}(expire_at);
The val column stores the cached value in binary large object (BLOB) format, allowing it to handle various data types depending on the chosen serializer.
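At store initialization, the DDL above can be executed directly; a sketch where name stands in for the configured table name:

const name = 'cache'; // table name, configurable in the real store

db.exec(`CREATE TABLE IF NOT EXISTS ${name} (
  key TEXT PRIMARY KEY,
  val BLOB,
  created_at INTEGER,
  expire_at INTEGER
)`);
db.exec(`CREATE INDEX IF NOT EXISTS index_expire_${name} ON ${name}(expire_at)`);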
2. Inserting or Updating Cache Entries
INSERT OR REPLACE INTO ${name}(key, val, created_at, expire_at) VALUES (?, ?, ?, ?)
expire_at is pre-calculated based on the ttl value in milliseconds, and val is pre-checked by a function isCacheable: (value: unknown) => boolean.
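Putting it together, set can be sketched along these lines, mirroring the batch version shown later; serializerAdapter, isCacheable, NoCacheableError, and defaultTtl come from the store’s configuration:

const set = async <T>(key: string, value: T, ttl?: number) => {
  if (!isCacheable(value)) {
    throw new NoCacheableError(`"${value}" is not a cacheable value`);
  }

  const ttlValue = ttl !== undefined ? ttl * 1000 : defaultTtl;
  const now = Date.now();
  const expireAt = now + ttlValue;

  const statement = db.prepare(
    `INSERT OR REPLACE INTO ${name}(key, val, created_at, expire_at) VALUES (?, ?, ?, ?)`
  );
  statement.run(key, serializerAdapter.serialize(value), now, expireAt);
};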
3. Retrieving Cache Entries
SELECT * FROM ${name} WHERE key = ? AND expire_at > ? LIMIT 1
Returns one record that has not expired.
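A corresponding sketch of get, assuming the serializer adapter exposes a deserialize counterpart:

const get = async <T>(key: string): Promise<T | undefined> => {
  const statement = db.prepare(
    `SELECT * FROM ${name} WHERE key = ? AND expire_at > ? LIMIT 1`
  );
  const row = statement.get(key, Date.now()) as { val: Uint8Array } | null;

  if (!row) {
    return undefined;
  }

  // bun:sqlite returns BLOB columns as Uint8Array
  return serializerAdapter.deserialize(row.val) as T;
};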
Auto Purge and Batch Operations
1. purgeExpired
const purgeExpired = async () => {
  const now = Date.now();
  if (now - lastPurgeTime >= 60 * 60 * 1000) {
    const statement = db.prepare(`DELETE FROM ${name} WHERE expire_at < ?`);
    statement.run(now);
    lastPurgeTime = now;
  }
};
Keeps the cache clean and efficient by regularly removing stale entries, ensuring that the cache does not grow indefinitely and affect performance.
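lastPurgeTime lives at module scope; one plausible way to wire the purge in, not necessarily how the package does it, is a small hypothetical wrapper that runs the check before each store operation:

// module-level state read and updated by purgeExpired
let lastPurgeTime = 0;

// hypothetical helper: run the cheap purge check before any store operation
const withPurge = async <T>(operation: () => Promise<T>): Promise<T> => {
  await purgeExpired(); // deletes expired rows at most once per hour
  return operation();
};

// e.g. const value = await withPurge(() => get('user:1'));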
2. Batch Set (mset)
const mset = async (pairs: [string, unknown][], ttl?: number) => {
  const ttlValue = ttl !== undefined ? ttl * 1000 : defaultTtl;
  if (ttlValue < 0) {
    return;
  }

  const expireAt = Date.now() + ttlValue;
  const stmt = `INSERT OR REPLACE INTO ${name} (key, val, created_at, expire_at)
    VALUES ${pairs.map(() => '(?, ?, ?, ?)').join(', ')}`;

  const bindings = pairs.flatMap(([key, value]) => {
    if (!isCacheable(value)) {
      throw new NoCacheableError(`"${value}" is not a cacheable value`);
    }
    return [key, serializerAdapter.serialize(value), Date.now(), expireAt];
  });

  const statement = db.prepare(stmt);
  statement.run(...bindings);
};
Batching improves efficiency by reducing the number of individual database operations. In the same way, mget returns all valid, non-expired records with a single query:
SELECT * FROM ${name} WHERE key IN (${placeholders}) AND expire_at > ?
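A sketch of mget built on that query, sizing the placeholder list to the number of requested keys and preserving their order in the result:

const mget = async (...keys: string[]) => {
  const placeholders = keys.map(() => '?').join(', ');
  const statement = db.prepare(
    `SELECT * FROM ${name} WHERE key IN (${placeholders}) AND expire_at > ?`
  );
  const rows = statement.all(...keys, Date.now()) as { key: string; val: Uint8Array }[];

  // map rows back to the requested key order, with undefined for misses
  const byKey = new Map(rows.map((row) => [row.key, serializerAdapter.deserialize(row.val)]));
  return keys.map((key) => byKey.get(key));
};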
Conclusion
Bun’s SQLite implementation combined with efficient binary encoding formats like Msgpackr provides a powerful solution for building fast and compact cache stores.
For the complete source code and implementation details, visit: https://github.com/sjdonado/cache-manager-bun-sqlite3
The package is also available on npm as cache-manager-bun-sqlite3.
Happy hacking!