
lib/db: Adjust transaction flush sizes downwards (#6686)

This reduces the size of our write batches before we flush them. This
has two effects: reducing the amount of data lost if we crash when
updating the database, and reducing the amount of memory used when we do
large updates without checkpoint (e.g., deleting a folder).

I ran our SyncManyFiles benchmark, as it is the one doing the most
transactions; there was no relevant change in any metric (I expect it's
limited by our fsyncs). This is good, as any visible change would have
been a decrease in performance.

I don't have a benchmark for deleting a large folder, so I'm taking
that part on trust for now...
Jakob Borg, 5 years ago
Commit f78133b8e9

1 file changed, 11 insertions, 4 deletions

lib/db/backend/leveldb_backend.go (+11, -4)

@@ -13,10 +13,17 @@ import (
 )
 
 const (
-	// Never flush transactions smaller than this, even on Checkpoint()
-	dbFlushBatchMin = 1 << MiB
-	// Once a transaction reaches this size, flush it unconditionally.
-	dbFlushBatchMax = 128 << MiB
+	// Never flush transactions smaller than this, even on Checkpoint().
+	// This needs to be just large enough to avoid flushing super tiny
+	// transactions, which would otherwise create millions of tiny
+	// transactions unnecessarily.
+	dbFlushBatchMin = 64 << KiB
+	// Once a transaction reaches this size, flush it unconditionally. This
+	// should be large enough to avoid forcing a flush between Checkpoint()
+	// calls in loops where we do those, so in principle just large enough
+	// to hold a FileInfo plus corresponding version list and metadata
+	// updates or two.
+	dbFlushBatchMax = 1 << MiB
 )
 
 // leveldbBackend implements Backend on top of a leveldb
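
As a rough illustration of how the two thresholds interact, here is a minimal sketch (the `batch` type and its methods are hypothetical, not Syncthing's actual implementation): `checkpoint()` flushes only once the batch has reached the minimum size, while every write forces a flush at the maximum size.

```go
package main

import "fmt"

const (
	KiB = 10
	MiB = 20

	// Never flush smaller than this on checkpoint(); avoids creating
	// millions of tiny transactions.
	dbFlushBatchMin = 64 << KiB
	// Flush unconditionally once a batch reaches this size; bounds
	// memory use and the amount of data at risk on a crash.
	dbFlushBatchMax = 1 << MiB
)

// batch is a hypothetical write batch applying the two thresholds.
type batch struct {
	size    int // bytes accumulated since the last flush
	flushes int // how many times we have flushed
}

// put adds n bytes of pending writes and flushes unconditionally
// once the batch reaches the maximum size.
func (b *batch) put(n int) {
	b.size += n
	if b.size >= dbFlushBatchMax {
		b.flush()
	}
}

// checkpoint flushes only if the batch has reached the minimum size,
// so super tiny transactions are not flushed on every checkpoint.
func (b *batch) checkpoint() {
	if b.size >= dbFlushBatchMin {
		b.flush()
	}
}

func (b *batch) flush() {
	b.flushes++
	b.size = 0
}

func main() {
	b := &batch{}
	b.put(16 << KiB)
	b.checkpoint() // below the 64 KiB minimum: no flush yet
	fmt.Println(b.flushes)
	b.put(64 << KiB)
	b.checkpoint() // now above the minimum: flush happens
	fmt.Println(b.flushes)
}
```

With the new values, a loop that calls `checkpoint()` frequently still batches small updates together, while a large checkpoint-free operation (such as deleting a folder) flushes every 1 MiB instead of every 128 MiB.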