Filecoin Storage-Proof Subsystem (rust-fil-proofs) [Translated]

What is rust-fil-proofs

Repository: https://github.com/filecoin-project/rust-fil-proofs

The Filecoin Proving Subsystem provides the storage proofs required by the Filecoin protocol. It is implemented entirely in Rust.

storage-proofs is intended to serve as a reference implementation for Proof-of-Replication (PoRep), while also performing the heavy lifting for filecoin-proofs.

Main components:

PoR (Proof of Retrievability: Merkle inclusion proofs)

DrgPoRep (Depth Robust Graph Proof-of-Replication)

StackedDrgPoRep

PoSt (Proof of Spacetime)

Filecoin Proofs (filecoin-proofs): a wrapper around storage-proofs, providing an FFI-exported API callable from C (and in practice called by go-filecoin via cgo). Filecoin-specific values of setup parameters are included here, and circuit parameters generated by Filecoin’s (future) trusted setup will also live here.


Earlier in the design process, we considered implementing what has become the FPS in Go – as a wrapper around potentially multiple SNARK circuit libraries. We eventually decided to use bellman – a library developed by Zcash, which supports efficient pedersen hashing inside of SNARKs. Having made that decision, it was natural and efficient to implement the entire subsystem in Rust. We considered the benefits (self-contained codebase, ability to rely on static typing across layers) and costs (developer ramp-up, sometimes unwieldiness of borrow-checker) as part of that larger decision and determined that the overall project benefits (in particular ability to build on Zcash’s work) outweighed the costs.


We also considered whether the FPS should be implemented as a standalone binary accessed from go-filecoin either as a single-invocation CLI or as a long-running daemon process. Bundling the FPS as an FFI dependency was chosen for both the simplicity of having a Filecoin node deliverable as a single monolithic binary, and for the (perceived) relative development simplicity of the API implementation.


However, the majority of technical problems associated with calling from Go into Rust are now solved, even while allowing for a high degree of runtime configurability.

Build
NOTE: rust-fil-proofs can only be built for and run on 64-bit platforms; building will panic if the target architecture is not 64-bits.

Optimizing for either speed or memory during replication

When replicating and generating the Merkle Trees (MT) for the proof at the same time, there is always a time-memory trade-off to consider; we present here strategies to optimize one at the cost of the other.

  • Speed

One of the most computationally expensive operations during replication (besides the encoding itself) is the generation of the indexes of the (expansion) parents in the Stacked graph, implemented through a Feistel cipher (used as a pseudorandom permutation). To reduce that time we provide a caching mechanism that generates them only once and reuses them throughout replication (across the different layers). Already built into the system, it can be activated with the environment variable

FIL_PROOFS_MAXIMIZE_CACHING=1

To check that it’s working you can inspect the replication log for the line "using parents cache of unlimited size". As the log indicates, we don’t have fine-grained control at the moment, so the cache either stores all parents or none. This cache can add almost an entire sector size to the memory used during replication; if you can spare it, though, this setting is highly recommended, as it has a considerable impact on replication time.

(You can also verify that the cache is working by inspecting the time each layer takes to encode, logged as "encoding, layer:": the first two layers, forward and reverse, will take more time than the rest while they populate the cache, and the remaining 8 should see a considerable drop in time.)
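The mechanism can be pictured with a small sketch; `ParentCache` and `parents_of` below are illustrative names, not the actual rust-fil-proofs API:

```rust
// Sketch of a parents cache: pay the (Feistel-based) parent-index
// computation once per node, then reuse the result across all layers.
// `parents_of` is a hypothetical stand-in for the real generator.
struct ParentCache {
    parents: Vec<Vec<u32>>,
}

impl ParentCache {
    fn new(num_nodes: u32, parents_of: impl Fn(u32) -> Vec<u32>) -> Self {
        // Generated only once; memory cost grows with the sector size.
        let parents = (0..num_nodes).map(parents_of).collect();
        ParentCache { parents }
    }

    fn parents(&self, node: u32) -> &[u32] {
        &self.parents[node as usize]
    }
}

fn main() {
    // Toy parent function: each node's parents are its two predecessors.
    let cache = ParentCache::new(8, |n| vec![n.saturating_sub(1), n.saturating_sub(2)]);
    assert_eq!(cache.parents(5), &[4, 3]);
    println!("ok");
}
```

The point of the trade-off is that the index generation runs once per node instead of once per node per layer, spending roughly a sector's worth of memory to save time.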

Speed-Optimized Pedersen Hashing: we use Pedersen hashing to generate Merkle Trees and verify Merkle proofs. Batched Pedersen hashing has the property that we can pre-compute known intermediary values intrinsic to the Pedersen hashing process that will be reused across hashes in the batch. By pre-computing and caching these intermediary values, we decrease the runtime per Pedersen hash at the cost of increased memory usage. We tune this speed-memory trade-off by varying the cache size via a Pedersen hash parameter known as the "window-size". This window-size parameter is configured via the pedersen_hash_exp_window_size setting in storage-proofs. By default, Bellman has a cache size of 256 values (a window-size of 8 bits); we increase the cache size to 65,536 values (a window-size of 16 bits), which results in a roughly 40% decrease in Pedersen hash runtime at the cost of a 9% increase in memory usage. See the Pedersen cache issue for more benchmarks and expected performance effects.
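The cache sizes quoted above follow directly from the window-size: a w-bit window requires 2^w precomputed values. A quick sanity check:

```rust
// A w-bit Pedersen-hash window needs 2^w precomputed intermediary
// values, so widening the window trades memory (table size) for speed
// (fewer group operations per hash).
fn cache_entries(window_size_bits: u32) -> u64 {
    1u64 << window_size_bits
}

fn main() {
    assert_eq!(cache_entries(8), 256); // Bellman's default window-size
    assert_eq!(cache_entries(16), 65_536); // the rust-fil-proofs setting
    println!("ok");
}
```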

  • Memory
    At the moment the default configuration is set to reduce memory consumption as much as possible, so there’s not much to do from the user side. (We now store the MTs on disk, which were the main source of memory consumption.) You should expect a maximum RSS between 1 and 2 sector sizes; if you experience peaks beyond that range, please report an issue (you can check the max RSS with the /usr/bin/time -v command).

Memory-Optimized Pedersen Hashing: for consumers of storage-proofs concerned with memory usage, the memory footprint of Pedersen hashing can be reduced by lowering the Pedersen hash window-size parameter (i.e. its cache size). Reducing the cache size will reduce memory usage while increasing the runtime per Pedersen hash. The window-size can be changed via the pedersen_hash_exp_window_size setting in settings.rs. See the Pedersen cache issue for more benchmarks and expected performance effects.

The following benchmarks were observed when running replication on 1MiB (1024 kibibytes) of data on a new m5a.2xlarge EC2 instance with 32GB of RAM for Pedersen Hash window-sizes of 16 (the current default) and 8 bits:

$ cargo build --bin benchy --release
$ env time -v cargo run --bin benchy --release -- stacked --size=1024

window-size: 16
User time (seconds): 87.82
Maximum resident set size (kbytes): 1712320

window-size: 8
User time (seconds): 128.85
Maximum resident set size (kbytes): 1061564

Note that with a window-size of 16 bits the replication runtime is roughly 30% lower (87.82 vs. 128.85 seconds), while the maximum RSS is about 60% higher (1,712,320 vs. 1,061,564 kB), compared to a window-size of 8 bits.

Feistel

Feistel ciphers: principles and implementation
Source: https://blog.csdn.net/android_jiangjun/article/details/79378137

The Feistel structure is a symmetric structure used in block ciphers. It is named after its inventor, Horst Feistel, a physicist and cryptographer who laid the groundwork for research on the Feistel structure while working at IBM. Many cipher standards adopt the Feistel structure, including DES.

Most block ciphers are essentially based on the Feistel network structure, so understanding the Feistel structure is very helpful for studying block cipher algorithms.

Block ciphers (the Feistel structure)

A block cipher takes the digit sequence obtained by encoding the plaintext message, divides it into blocks of length n (vectors of length n), and, under the control of a key k = (k0, k1, …, kt-1), transforms each block into an output digit sequence (a vector of length m; that is, the input and output block lengths may differ). It differs from a stream cipher in that each output digit depends not only on the plaintext digit input at the corresponding moment but on a whole group of n plaintext digits. In essence, a block cipher is a substitution cipher over words of length n.

To ensure security, the algorithm should satisfy the following requirements:

1. The block length n must be large enough to defeat exhaustive plaintext search.

2. The key space must be large enough (i.e. the vector k must be long enough), and weak keys should be eliminated as far as possible so that all keys are equally good, defeating exhaustive key search. The key itself must not be too long, however, or it becomes hard to manage.

3. The key-determined substitution algorithm must be complex enough to resist differential and linear cryptanalysis.

4. Encryption and decryption should be simple and easy to implement.

5. Data expansion and error propagation should be kept as small as possible.
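As a concrete illustration of the structure (a toy sketch, not DES; the round function and keys are made up), a Feistel network encrypts by repeatedly swapping the block halves and XORing one half with a keyed round function of the other; decryption runs the same structure with the round keys in reverse order:

```rust
// Toy Feistel network: 4 rounds over a 64-bit block split into 32-bit
// halves. The round function F is deliberately simple and need NOT be
// invertible; invertibility comes from the Feistel structure itself.
fn round_fn(half: u32, key: u32) -> u32 {
    half.wrapping_mul(0x9E37_79B9).wrapping_add(key).rotate_left(5)
}

fn feistel_encrypt(block: u64, round_keys: &[u32]) -> u64 {
    let (mut l, mut r) = ((block >> 32) as u32, block as u32);
    for &k in round_keys {
        // One round: new left = old right, new right = old left XOR F(r, k).
        let (new_l, new_r) = (r, l ^ round_fn(r, k));
        l = new_l;
        r = new_r;
    }
    ((l as u64) << 32) | r as u64
}

fn feistel_decrypt(block: u64, round_keys: &[u32]) -> u64 {
    let (mut l, mut r) = ((block >> 32) as u32, block as u32);
    // Same structure, round keys applied in reverse order.
    for &k in round_keys.iter().rev() {
        let (new_l, new_r) = (r ^ round_fn(l, k), l);
        l = new_l;
        r = new_r;
    }
    ((l as u64) << 32) | r as u64
}

fn main() {
    let keys = [0xA5A5_A5A5u32, 0x3C3C_3C3C, 0x0F0F_0F0F, 0x7777_7777];
    let plaintext: u64 = 0x0123_4567_89AB_CDEF;
    let ciphertext = feistel_encrypt(plaintext, &keys);
    assert_eq!(feistel_decrypt(ciphertext, &keys), plaintext);
    println!("round trip ok");
}
```

The fact that F need not be invertible is what makes this construction convenient as a pseudorandom permutation, e.g. for the parent-index generation described earlier.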

What is a Pedersen Hash?

Half of 2019 has passed; will it be the best year yet?
Reference URL: https://www.tuoluocaijing.cn/article/detail-49313.html
Zcash: a deep dive into Pedersen Hash/Commitment computation
Reference URL: https://www.chainnews.com/articles/179526099055.htm
What is a Pedersen Hash?
Source: https://blog.csdn.net/mutourend/article/details/93508243

ZCash replaced SHA256 with the Pedersen hash.
Traditionally, the Pedersen hash is an old algorithm that has been around for many years; long regarded as very inefficient, it had been all but forgotten. In the zero-knowledge proof setting of zkSNARKs, however, its circuit turns out to be remarkably compact and performs surprisingly well: the circuit is roughly one-thirtieth the size of SHA256's.
