
build: fix generation of large .vdi images

Instead of loading the whole image into memory when generating the
sha256 sum, we load the file in chunks and update the hash incrementally
to avoid a MemoryError in Python. Also remove a stray empty line.

Fixes: #13056
Signed-off-by: Adones Pitogo <[email protected]>
(mentioned the empty-line removal, added the Fixes: tag from the PR)
Signed-off-by: Christian Lamparter <[email protected]>
Adones Pitogo authored 2 years ago
Commit bdb4b78210
1 changed file, 8 insertions(+), 2 deletions(-)

scripts/json_add_image_info.py (+8, -2)

@@ -13,7 +13,6 @@ if len(argv) != 2:
 json_path = Path(argv[1])
 file_path = Path(getenv("FILE_DIR")) / getenv("FILE_NAME")
 
-
 if not file_path.is_file():
     print("Skip JSON creation for non existing file", file_path)
     exit(0)
@@ -37,7 +36,14 @@ def get_titles():
 
 
 device_id = getenv("DEVICE_ID")
-hash_file = hashlib.sha256(file_path.read_bytes()).hexdigest()
+
+sha256_hash = hashlib.sha256()
+with open(file_path, "rb") as f:
+    # Read the file in 4 KiB chunks and feed each chunk into the hash object
+    for byte_block in iter(lambda: f.read(4096), b""):
+        sha256_hash.update(byte_block)
+
+hash_file = sha256_hash.hexdigest()
 
 if file_path.with_suffix(file_path.suffix + ".sha256sum").exists():
     hash_unsigned = (
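
For context, the new code is the standard incremental-hashing pattern from Python's hashlib. Below is a minimal standalone sketch of the same technique; the sha256_of_file helper name and the command-line wrapper are illustrative additions, not part of the patch:

    import hashlib
    import sys
    from pathlib import Path


    def sha256_of_file(path: Path, chunk_size: int = 4096) -> str:
        """Return the hex SHA-256 digest of a file without reading it all at once."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # iter() with a b"" sentinel calls f.read() repeatedly until EOF,
            # so peak memory stays bounded by chunk_size regardless of file size.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()


    if __name__ == "__main__":
        print(sha256_of_file(Path(sys.argv[1])))

On Python 3.11 and newer, hashlib.file_digest(f, "sha256") performs the same chunked read internally; the explicit loop keeps the script compatible with older interpreters.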