diff --git a/content/700-optimize/300-recordings.mdx b/content/700-optimize/300-recordings.mdx
index 71f84f56b9..64cd4f7204 100644
--- a/content/700-optimize/300-recordings.mdx
+++ b/content/700-optimize/300-recordings.mdx
@@ -45,6 +45,7 @@ When a recording session ends, Optimize generates recommendations such as:
 - [Using `@db.Char(n)`](/optimize/recommendations/avoid-char)
 - [Using `@db.VarChar(n)`](/optimize/recommendations/avoid-varchar)
 - [Using `timestamp(0)` or `timestamptz(0)`](/optimize/recommendations/avoid-timestamp-timestampz-0)
+- [Storing large objects or BLOBs in the database](/optimize/recommendations/storing-blob-in-database)
 - [Indexing on unique columns](/optimize/recommendations/indexing-on-unique-columns)
 - [Long-running transactions](/optimize/recommendations/long-running-transactions)
 - [Unnecessary indexes](/optimize/recommendations/unnecessary-indexes)
diff --git a/content/700-optimize/400-recommendations/1400-storing-blob-in-database.mdx b/content/700-optimize/400-recommendations/1400-storing-blob-in-database.mdx
new file mode 100644
index 0000000000..22824e7d72
--- /dev/null
+++ b/content/700-optimize/400-recommendations/1400-storing-blob-in-database.mdx
@@ -0,0 +1,30 @@
+---
+title: 'Storing large objects or BLOBs in the database'
+metaTitle: 'Optimize recommendations: Avoid storing large objects or BLOBs in the database'
+metaDescription: "Learn about the recommendations for avoiding the storage of large objects or BLOBs in the database."
+tocDepth: 3
+toc: true
+---
+
+Optimize provides recommendations to help identify and resolve performance issues caused by storing large objects in the database. It also suggests alternative approaches to mitigate these challenges.
+
+The following model uses the `Bytes` type:
+
+```prisma
+model User {
+  id          Int     @id @default(autoincrement())
+  name        String?
+  // Storing raw image data directly in the database
+  avatarBytes Bytes?
+}
+```
+
+## What is the problem?
+
+Storing large binary objects (such as images) in the database can lead to several challenges:
+
+- **Excessive storage usage**: Large objects occupy significant space in the database, complicating management.
+- **Increased I/O load**: Handling large objects adds strain to the database's input/output operations.
+- **Slower query performance**: Most traditional databases are not optimized for efficiently serving large binary content, resulting in degraded performance during queries and updates.
+
+Moreover, storing large objects directly in the database can cause backups to become disproportionately large, increasing the time required to restore them. Serving these files through the database also creates a performance bottleneck, particularly under high traffic or frequent access.
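+
+As a minimal sketch of one possible alternative (the `avatarUrl` field name and the use of external object storage are illustrative assumptions, not a specific approach prescribed by Optimize), the binary data can live in a dedicated object store while the database keeps only a reference to it:
+
+```prisma
+model User {
+  id        Int     @id @default(autoincrement())
+  name      String?
+  // Reference to the image in external object storage instead of raw bytes
+  avatarUrl String?
+}
+```
+
+This keeps rows and backups small and lets the object store handle serving the file, which avoids the I/O and query-performance issues described above.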