
In this post, we will talk specifically about the work we performed as part of our security assessment of the Multiplier.Finance environment. The public report provided to the Multiplier team will be published within the FAQ section of their web portal.
Executive Summary: No unresolved security findings remain in the system following our review. Our final audit report contains zero (0) critical, high, or medium severity vulnerabilities.
The Multiplier.Finance environment is a multi-tier infrastructure within AWS, operated on Binance Smart Chain (BSC), and uses its own governance token (bMXX). Although they had already completed an initial audit of their smart contracts, the Multiplier team wished to give their community additional confidence with a complete review of the entire infrastructure and deployment environment.
When Kudelski approaches an engagement of this scope, we propose a multi-phase review, because security is always much larger than just a smart contract review. The security of an “App”, a “DAPP”, or a “Site” encompasses the infrastructure, flows, contracts, wallets, and any other ingress and egress paths and control points that could affect the flow of funds. Even though the mathematics underlying a blockchain is very solid at this point, contracts and infrastructure are not inherently secure, so it is the sign of a mature project to ask for a complete assessment of its work.
Initially, we performed a re-review of the smart contracts, as it is always best practice to have multiple reviews of critical components; ours was the second such review of the smart contract code. We found no critical or high-risk issues in the smart contracts, and all of our low/informational findings related to dependencies or minor concerns with style or flow.
The code that we reviewed resides in a public repository at https://github.com/Multiplier-Finance/MCL-SmartContracts.
The reviews are based on the commit hash:
MCL-SmartContracts: cff17d6e07b51e7468a4aba72ae83b309b98d561
All third-party libraries were deemed out-of-scope for this review and are expected to work as designed. Based on the criticality of the dependency, we looked at the current state of the third-party libraries included when necessary.
Our general process for this review included:
Threat Model & Architecture Review
Code Review
Recommendations
We maintained a complete and consistent view across the known components and followed a systematic approach as we conducted the threat model workshop and code review. First, threat actors of concern were identified and data flows between the system components were requested. Based on our understanding of each component from the documentation and interviews, remote follow-up meetings were held with members of the Multiplier.Finance team to clarify technical and functional details, followed by a code review.
In addition to infrastructure, the following scenarios were in scope for the Threat Model & Assessment:
Upon analysis of the infrastructure, contracts, and control points – we determined that the Multiplier team has handled all of these threat scenarios effectively.
As a result of our code review & assessment, we discovered 0 High, 0 Medium, 3 Low, and 15 Informational findings. The Multiplier team resolved all of these findings to our satisfaction.
We want to thank the Multiplier team for choosing Kudelski Security.
About Multiplier Finance
Multiplier.Finance operates a system known as “Multi-Chain Lend,” an algorithmic money market designed to bring secure and unique lending and borrowing opportunities, such as flash loans, to the Binance Smart Chain. The protocol is architected as a fork of Aave, with added revenue-sharing components for liquidity providers and for the token holders who govern the protocol. bMXX, a BEP-20 token, will be the governance token of Multi-Chain Lend.
ING (the Dutch bank) recently released its own implementation of the popular Gennaro-Goldfeder’18 Threshold ECDSA signature scheme in the form of a library written in Rust. Kudelski Security audited their code; our report is available here. During the audit, we found a potentially serious problem in the protocol itself: it does not depend on ING’s implementation, but rather on a security assumption in the original protocol that cannot be taken for granted in many real-world cases. This problem might allow a single malicious attacker to delete or lock funds and blackmail all the other peers.
Threshold signature schemes (TSS) have seen growing interest and rapid adoption in the last few years, mainly driven by blockchain applications such as Bitcoin. A blockchain transaction (such as a transfer of funds, or the execution of a smart contract) in its basic form is created and verified by the network via a digital signature generated with a private signing key (usually held in a user’s wallet). This presents a security challenge: anyone with access to the private signing key can issue an (irrevocable) transaction. This is clearly an unacceptable risk for highly regulated environments such as large financial institutions, which cannot afford to lose billions of dollars through a single, irreversible transaction executed by a malicious hacker who manages to exfiltrate a high-value key.
TSSs solve this problem by splitting the generation of the signature among N different users, with a configurable threshold parameter T such that any subset of T users (among the N authorized ones) is sufficient to generate the signature, but any subset of T-1 or fewer users is not. This is reminiscent of another cryptographic primitive, the “secret sharing scheme” (SSS), but it is actually different: in an SSS, a subset of T among N users collaborate to let a leader reconstruct a secret (for example, a signing key). After the reconstruction, however, the leader knows that secret and can use it however they wish, without requiring the collaboration of the other users, so the key is once again a single point of failure. In a TSS, by contrast, the signing key is never fully reconstructed! Each of the T users takes part in a complex multi-party computation (MPC) protocol, contributing a partial signature; those partial signatures are then recombined by a leader into a single valid signature. The difference is that the leader cannot reuse the collected data to generate a different signature, so the scheme is highly interactive and requires several rounds of complex, interlocking encryption, signatures, and zero-knowledge subprotocols.
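To make the contrast with TSS concrete, here is a minimal sketch of Shamir secret sharing over a prime field. It is purely illustrative (the field modulus, the secret, and the function names are our own choices, not part of any audited code): note how `reconstruct` hands the full secret to whoever runs it, which is exactly what a TSS avoids.

```python
# Illustrative Shamir secret sharing: any T of N shares reconstruct the
# secret via Lagrange interpolation at x = 0, and the reconstructing
# "leader" then knows the secret outright (unlike in a TSS).
import random

P = 2**127 - 1  # an arbitrary prime field modulus for illustration

def make_shares(secret, t, n):
    """Split `secret` into n points on a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
```

Any subset of three shares yields the same secret, and the leader is then free to sign with it forever; a TSS never lets any single party reach this point.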
The appeal of TSS for many applications is that it is backward-compatible with already deployed signature schemes: a signature generated by a TSS is indistinguishable from a “normal” signature with respect to a given public key. The verifying party does not care whether the signature was generated the “standard” way or the TSS way, because there is only one public key to check against and the verification algorithm is unmodified. So, for example, it is not necessary to modify the Bitcoin protocol to support a new type of signature; it suffices to add TSS support client-side.
Not all signature schemes lend themselves equally well to “TSS-ization,” though. Certain schemes, such as BLS or Schnorr signatures, are much easier to implement as a TSS. Others, such as ECDSA, are much more complex. Unfortunately, ECDSA happens to be the standard currently adopted by Bitcoin and many other important blockchains. Existing TSS schemes for ECDSA are quite difficult to analyze and implement, and they are prone to issues in both their design and their code.
During our audit for ING we stumbled upon one of these issues. ING’s library implements GG18, one of the most popular TSS schemes for ECDSA. One of the features of GG18 is the “key resharing protocol”, which allows an old committee of peers to refresh the shares of the secret key across a new committee (for example, when new members need to be added or existing ones removed). The key resharing protocol is a delicate procedure that has been the target of several recent attacks. One of these attacks, “forget-and-forgive”, is described as:

The proposed mitigation is:

This mitigation is implemented correctly by ING. However, we found that it is insufficient – and might actually make things worse – if a robust broadcast channel is not available, as it then allows a single malicious attacker to delete or lock funds and blackmail all the other peers.
A robust broadcast channel is a reliable way of broadcasting messages in such a way that all parties receive the same broadcast message. This is easily achieved if a trusted third party relay is available, but for the general case of peer-to-peer networks, it is not always trivial. For example, the naive solution of sending a broadcast message as N distinct direct point-to-point connections is clearly not robust, because a malicious sender could transmit a message X to certain peers and a different message Y to other nodes.
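One standard way to approximate a robust broadcast over point-to-point links is an "echo" round: after receiving the sender's message, every peer forwards a hash of what it saw to every other peer, and all peers abort if the echoes disagree. The sketch below is our own illustration of this idea (the function name and message values are invented for the example), not code from ING's library.

```python
# Illustrative echo round: peers detect an equivocating sender by
# comparing digests of the message each of them received.
import hashlib

def echo_round(received):
    """received: dict mapping peer name -> message bytes the sender
    delivered to that peer. Returns True iff all peers saw the same bytes."""
    digests = {p: hashlib.sha256(m).hexdigest() for p, m in received.items()}
    reference = next(iter(digests.values()))
    # Each peer would compare its own digest with the echoes it collects;
    # a single mismatch means the "broadcast" was not consistent.
    return all(d == reference for d in digests.values())

# An honest sender delivers identical bytes to everyone:
assert echo_round({"A": b"ACK", "B": b"ACK", "C": b"ACK"})
# A malicious sender equivocates, and the echo round exposes it:
assert not echo_round({"A": b"ACK", "B": b"ABORT", "C": b"ACK"})
```

Without such an echo (or a trusted relay), each peer only knows what the sender told it directly, which is precisely the gap the attack below exploits.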
This is exactly the scenario that enables a devastating attack in the GG18 resharing protocol. Let’s say a group of 4 peers (A, B, C, D) with threshold value 3 (any 3 of them are necessary and sufficient to sign) wants to add a fifth peer E to the committee (still with threshold value 3). So they start the resharing protocol, and in order to avoid the “forget-and-forgive” attack they conclude the protocol with the final round described in the mitigation above.
However, E is malicious. After the resharing protocol concludes, E crafts different messages for different peers instead of broadcasting a final “ACK”:
Now see what happens: A and B think that everything went fine, so they discard the secret key material (their “shares”) related to the old committee configuration and migrate to the shares for the new configuration. C and D, however, follow the proposed mitigation to the “forget-and-forgive” attack: they assume that something went wrong, do not save the new shares to disk, and fall back to the old shares. And now we have a problem: a single malicious adversary, E, has managed to split the group of peers into two committees, one old and one new, neither of which has enough information to reconstruct the secret without E’s collaboration! E can therefore blackmail the committee, withholding all the funds associated with the shared wallet.
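The split can be checked with simple arithmetic. In this hypothetical simulation (peer names and message strings are from the example above, the function is our own), each peer keeps either its old or its new shares depending on the final message E delivered to it, and neither camp reaches the threshold of 3:

```python
# Simulate the committee split: a peer that receives a final "ACK"
# discards its old shares; any other final message makes it fall back
# to the old shares, per the forget-and-forgive mitigation.
def committee_after_final_round(msg_from_E):
    return "new" if msg_from_E == "ACK" else "old"

THRESHOLD = 3
messages = {"A": "ACK", "B": "ACK", "C": "ABORT", "D": "ABORT"}
camps = {p: committee_after_final_round(m) for p, m in messages.items()}

old_camp = [p for p, c in camps.items() if c == "old"]  # C, D
new_camp = [p for p, c in camps.items() if c == "new"]  # A, B

# Neither camp can reach the threshold without E's cooperation:
assert len(old_camp) < THRESHOLD and len(new_camp) < THRESHOLD
```

With only two peers holding old shares and two holding new shares, every quorum of three must include E, which is exactly the blackmail position described above.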
Under the right circumstances, this attack can be devastating. It is important to note that this has nothing to do with ING’s implementation, nor with a flaw in the security proof of GG18, but rather with the security model not taking into account real-world implementations of the communication channel. Countermeasures against this kind of attack can only be implemented at the application level, so developers have to be careful when adopting existing GG18 solutions.
We are grateful to ING for the trust they placed in us, and we found it very stimulating to work with their tech team.
Proton Blockchain – “The Payment Blockchain” – is a software blockchain developed by Metallicus, Inc., built on the foundation of EOSIO, which positions itself as having advantages over other blockchain solutions thanks to its speed, unique identity system, signing system, and fee structure.
During the initial phase of the project, we recognized that the main code base is a fork of the EOSIO blockchain project at tag v1.9.1.
This code was de-scoped from the assessment because the EOSIO project is widely used and has effectively been reviewed before. The forked version in the Proton Chain repository was used as a reference when needed.
In this post, we will talk specifically about the work we performed as part of our security assessment for the Metallicus team. For a more in-depth overview of Proton and its roadmap, you can read about it on the Proton blog.
The source code for the project was supplied by Metallicus through the GitHub repository at https://github.com/ProtonProtocol and specifically under the proton.contracts project.
The assessment was conducted by the Kudelski Security Team, with the tests taking place in the 4th Quarter of 2020 and focused on the following objectives:
A separate engagement, also performed in the 4th Quarter of 2020, was a review of the Proton Swap tool, which allows users to swap XPR based on the Ethereum blockchain for mainnet XPR – which unlocks all other functions of the Proton Blockchain.
The review yielded fewer findings than many of our other projects of similar size and complexity; the findings mostly concerned initialization issues, arithmetic operation checks, resource allocation, and code clarity around permission checks.
We believe that the relatively low severity of the findings is due to the detailed threat modeling, discussions, and walkthroughs with the core development team. In addition, detailed security assessments have previously been performed on EOSIO, which also contributed to the low number of findings. We nonetheless recommended that the project team continue to monitor bug fixes on the EOSIO core and incorporate those updates and bug fixes.
In this code assessment, we performed the following tasks:
The review for this project was performed using manual methods, relying on the experience of the reviewer. No dynamic testing was performed; only custom-built scripts and tools were used to assist the reviewer during the testing.
Code Safety
We analyzed the provided code, checking for issues related to the following categories:
Cryptography
We analyzed the cryptographic primitives and components as well as their implementation. We checked in particular:
Technical Specification Matching
We analyzed the provided documentation and checked that the code matches the specification. We checked for things such as:
As a result of this assessment, we did not find any critical shortcomings in the reviewed components.
Metallicus quickly patched all the problems we identified and let us review their changes to confirm their effectiveness.
Note that we did not find any evidence of malicious intent, flawed logic, or potential backdoors in the codebase.
We would like to thank Metallicus, Inc. for trusting us, for their availability and the pleasant collaboration throughout the assessment!
This blog post is about benchmarking the IoT prototype we have been building for the FENTEC project, using functional encryption. For more information, please see the two previous blog posts we wrote on this topic:
The prototype is composed of three elements: a camera, a gateway, and a backend. It is based on ffmpeg and is implemented as three bitstream filters, one for each element in the system.
The camera captures and encodes video in H.264 format. It extracts the motion vectors of the H.264-encoded stream and encrypts them using functional encryption, thanks to the CiFEr library. Those encrypted motion vectors are then bundled as side data, alongside the AES-encrypted video stream. This is possible because motion vectors are stored in H.264 Network Abstraction Layer (NAL) units of type H264_NAL_SEI, while the image data of the video itself is stored in NAL units of type H264_NAL_SLICE and H264_NAL_IDR_SLICE. These different NAL units can therefore be encrypted differently without causing any problems. SEI messages are additional messages that can carry any data format and do not have to be video-related, so we store the functionally encrypted motion vectors in them. The video image data is stored in the two other types of NAL units mentioned above; symmetrically encrypting NAL units of those types is enough to make the video unreadable by anyone not possessing the encryption key.
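The routing decision above can be sketched in a few lines. In H.264, the NAL unit type is carried in the low 5 bits of the NAL header byte; SEI is type 6, non-IDR and IDR slices are types 1 and 5. The actual prototype implements this inside ffmpeg bitstream filters; this standalone sketch (with our own function name) only illustrates the classification:

```python
# Route each H.264 NAL unit (start code already stripped) to the
# appropriate encryption path based on its type, the low 5 bits of
# the first header byte.
H264_NAL_SLICE = 1      # non-IDR coded slice (image data)
H264_NAL_IDR_SLICE = 5  # IDR coded slice (image data)
H264_NAL_SEI = 6        # SEI message (carries the motion vectors here)

def route_nal_unit(nal: bytes) -> str:
    nal_type = nal[0] & 0x1F
    if nal_type == H264_NAL_SEI:
        return "functional-encryption"  # encrypted motion vectors
    if nal_type in (H264_NAL_SLICE, H264_NAL_IDR_SLICE):
        return "aes"                    # symmetrically encrypted image data
    return "passthrough"                # e.g. SPS/PPS, left untouched

assert route_nal_unit(bytes([0x06])) == "functional-encryption"
assert route_nal_unit(bytes([0x65])) == "aes"  # 0x65 & 0x1F == 5 (IDR slice)
```

Because each NAL unit declares its own type, the two encryption schemes never touch the same bytes, which is why they can coexist in one stream.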
The gateway uses the corresponding functional encryption key to evaluate whether there is motion within a group of video frames. For each group of frames, if there is motion, the gateway forwards the symmetrically encrypted group of frames to the backend. Thus, the amount of data transmitted over the wire is reduced to the interesting segments, where there is motion in the video. Since the video stream is encrypted, the gateway cannot learn anything about the image data inside the video and therefore does not need to be trusted. The untrusted gateway can perform the computationally intensive motion detection on behalf of the potentially low-powered camera.
Unlike the gateway, the backend can decrypt the received AES encrypted video stream because it knows the symmetric encryption key. The decrypted video stream contains only frames of the original video where motion was detected by the gateway. The backend is able to play the video.
We measure how the number of motion vectors used affects the number of frames that can be processed per second on the camera side and on the gateway side. We also measure how turning on functional encryption of the motion vectors affects performance. Any given frame contains a certain number of motion vectors; we do not need to use all of them, but we are interested in the performance impact of using only a certain number of them.
We measure how the additional side-data size varies as the number of motion vectors used changes. Additionally, does functional encryption of the side-data have an impact on the output video sent to the gateway?
The gateway removes the segments in the video stream where no significant movement occurs. A segment is a group of pictures (GOP). We define a threshold maximum value for the sum of the motion vector norms and call it the GOP threshold. If the computed value exceeds the GOP threshold, the gateway considers that movement is detected. In that case, the segment is forwarded to the backend. Otherwise, it is removed. We measure the size of the stream received by the backend and compare it to the size of the original input video produced by the camera to see how much traffic can be spared.
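The gateway's forwarding decision can be sketched as follows. Note that this is a plaintext illustration of the decision rule only, with invented names and values: in the real prototype the sum of norms is evaluated over the functionally encrypted motion vectors, so the gateway never sees the vectors themselves.

```python
# Decide whether a group of pictures (GOP) contains motion: sum the
# norms of its motion vectors and compare against the GOP threshold.
import math

def gop_has_motion(motion_vectors, gop_threshold):
    """motion_vectors: list of (dx, dy) displacement pairs for one GOP."""
    total = sum(math.hypot(dx, dy) for dx, dy in motion_vectors)
    return total > gop_threshold

still = [(0, 0), (1, 0), (0, 1)]          # total norm = 2
moving = [(30, 40), (12, 5), (25, 60)]    # total norm = 50 + 13 + 65 = 128

assert not gop_has_motion(still, gop_threshold=70)   # GOP is dropped
assert gop_has_motion(moving, gop_threshold=70)      # GOP is forwarded
```

The threshold value trades sensitivity for bandwidth: a higher threshold drops more GOPs, which is what the measurements in Figure 4 quantify.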
The benchmarks were run on a single machine, running all three elements of the system (camera, gateway, and backend). This machine is a five-year-old desktop computer with an Intel i7-6700K CPU clocked at 4.0 GHz, with 4 cores and 8 threads, and 32 GB of memory.
All measures were performed with a pre-recorded 1080p H.264 video at 30 frames per second as input for the camera.
Figure 1 shows the number of frames per second for a given number of motion vectors used, on the camera and on the gateway, and encrypted motion vectors.
We can see that the camera outputs more frames per second than the gateway for smaller numbers of motion vectors (up to 95). For larger numbers of motion vectors, the camera and the gateway appear to output similar FPS rates.
The camera is still able to deliver a stable 30 frames per second with 40 motion vectors. The gateway drops below 30 frames per second if the number of motion vectors is greater than 13.
With a small but sufficient number of motion vectors (3), the camera outputs 233 FPS and the gateway 67 FPS. Therefore, it would even be possible to process a 60 frame per second video in real time, with encryption of the motion vectors, without any slowdown.

The results of the same measures, but with motion vector encryption disabled are shown in Figure 2.

When motion vector encryption is disabled, the camera and gateway do not seem to be affected by the number of motion vectors and deliver a stable performance of 419 FPS on average.
Motion vectors compose the side-data sent alongside the video stream from the camera to the gateway. The overhead size added to the overall stream for a given number of motion vectors used is shown in Figure 3.

We see that for a small number of motion vectors (1 to 10), the overhead size is roughly 10% when encryption is disabled, and in the 10%-20% range with encryption. This is still acceptable. However, as the number of motion vectors grows to larger values, such as 1500, the overhead reaches 41% without encryption and 1431% with encryption. It is therefore advisable to use a small number of motion vectors to minimize the overhead. This is not a problem, since motion detection can be performed properly with only 3 motion vectors. With 3 motion vectors, the overhead is only 13% with encryption and 9% without. Thus, encryption adds very little overhead with a small number of motion vectors.
It was empirically observed that, as soon as the backend system has started playing the gateway stream, there is no significant delay other than the one due to network latency. If the number of motion vectors is increased, it may happen that the camera or the gateway become unable to process the stream fast enough. Indeed, since the source video is 30 frames per second, if the gateway cannot process frames at that rate, then the backend system will receive frames slower than the video playback speed. When that happens, the video may play slower than expected. As previously shown in Figure 1 above, we have seen that the maximum value for the number of motion vectors, for which processing can happen at least at 30 frames per second, is 40 for the camera and 13 for the gateway. However, only 3 motion vectors are sufficient for proper movement detection. There is therefore plenty of room for processing videos with higher frame rates.
We used a 10MB video as input and measured the size of the output video sent to the backend with various values for the Group of Picture (GOP) threshold. The same measures were performed with 3, 6 and 9 motion vectors. The results are shown in Figure 4.

As expected, the greater the number of motion vectors used, the greater the GOP threshold must be for the output stream size to start dropping. Indeed, since the sum of the norms of the motion vectors for a GOP is compared to the GOP threshold to decide whether to forward that GOP to the backend, this result simply confirms that motion detection is performed.
The lines of the plot have another use, however. They clearly show the minimum GOP threshold that should be used so that movement is detected. For 3 motion vectors, the threshold should be set to at least 70. For 6 motion vectors, the threshold should be greater than 105. Finally, for 9 motion vectors, the GOP threshold should have a value greater than 125 for motion to be detected.
This also confirms that motion is properly detected with as little as 3 motion vectors.
Encryption of the motion vectors was successfully added to the prototype. We have shown that such encryption adds little overhead with small numbers of motion vectors and allows streaming a 30 FPS 1080p video in real time. According to our measurements, there is even room left for streaming at up to 60 frames per second without slowdowns.
Tangem provides "smart banknotes for digital assets", as smart card storage media for Bitcoin private keys with basic wallet functionality. Tangem hired Kudelski Security to perform a security audit of the source code written by Tangem to offer these features.
We identified a number of security risks and then ensured that they were appropriately mitigated by the Tangem engineers. We believe that these countermeasures provide adequate defenses against counterfeiting and cloning of cards, and against theft of blockchain assets. Our work covered the internal logic of the cards as defined by the source code, but we did not assess the cards' security against physical attacks (the card includes a number of protections, including those provided by EAL6+ components).
In particular, we didn't find any backdoor, malicious or suspicious undocumented feature in the firmware. In order to ‘freeze’ the audited code and exclude further modifications, we compiled the firmware v.1.28 and then stored a copy of the resulting binary fingerprint. This fingerprint can now be embedded into users’ host (NFC) applications to verify the integrity of the firmware in each banknote that they hold.
The full audit report is not published, because it contains numerous references to proprietary information, such as snippets of the firmware source code. We thank Tangem for trusting us and for organizing the logistics for the binary integrity verification.