
I recently presented work on the analysis of a file encryption solution that claimed to implement “AES-1024 military grade encryption”. Spoiler alert: I did not break AES, and this work does not concern the security of AES itself; readers interested in that topic should look to the dedicated research literature.
This project started during a forensic analysis. One of my colleagues came to me with a USB stick containing a vault encrypted with the SanDisk Secure Access software and asked whether it was possible to bruteforce the vault password to recover its content. I did not know this software, so I started researching it. It turned out that this solution is distributed by SanDisk by default on any storage device you buy from them.

The solution is convenient: a user runs the binary stored on the disk, and after entering the correct password the vault is unlocked and the files become accessible. Once the software is closed, the files are encrypted again and no longer accessible. So far nothing uncommon, but one thing drew my attention. In the Options menu, you can choose your “Preferred encryption method”.

I could choose (had I bought the premium version of the software) between several methods: “AES 128 bit”, “AES 256 bit”, “AES 512 bit” and “AES 1024 bit”. This was surprising, since AES keys are defined in the standard to be either 128, 192 or 256 bits long, but not 512 or 1024 bits. I was therefore definitely interested in finding out what was behind this solution.
I dug a bit deeper and discovered that the solution is developed for SanDisk by a company called ENCSecurity. Interestingly, the solution is generic and is also distributed to Sony and Lexar under different names. ENCSecurity also sells its own paid version with more options. I was surprised that the solution was not yet supported by John the Ripper or Hashcat. In addition, the security claims on their website are really strong.

They claimed to provide “Ultimate encryption using 1024 bit AES keys, Military grade”. For all these reasons, I decided to analyze the solution to figure out how it was implemented.
The binary is a PE packed with UPX. PE Explorer allowed me to unpack it while keeping it runnable. In the ENCSecurity version of the software, an option enables application logging. At the beginning of the log I had:
05:11:42.633 D utils.cpp 372 showMessage returned 0
05:11:42.633 I encmainwindow.cpp 738 Starting ENCDataVault70 version: 7.1.1W
05:11:42.633 I encmainwindow.cpp 739 Date: 28.04.2021 05:11
05:11:42.633 I encmainwindow.cpp 740 Qt library version: 5.9.6
05:11:42.638 I encmainwindow.cpp 741 OpenSSL system library: OpenSSL 1.0.2q 20 Nov 2018
05:11:42.638 I encmainwindow.cpp 742 OpenSSL version: OpenSSL 1.0.2q-fips 20 Nov 2018
05:11:42.638 I enclogging.cpp 322 Application name: "ENCDataVault70"
I now knew it used the OpenSSL and Qt libraries, and I had the version numbers as well. This allowed me to create a Ghidra Function ID database from the same library versions, and Ghidra matched some of the function signatures in the binary. This simplified the analysis a lot: all the Qt functions could be set aside since they concern the GUI, while the interesting functions use OpenSSL for the cryptographic algorithms.
From a general point of view, I was analyzing a password hashing function. Such a function takes a user password as input and hashes it into a key which is later used to encrypt or decrypt data. Usually, the password hashing function also takes a unique, randomly generated salt as input to prevent precomputation attacks such as dictionary or rainbow table attacks. Another common parameter is the iteration count, which adjusts the work factor: the higher the iteration count, the longer the hash takes to compute, and thus the harder it is for an attacker to bruteforce the password. Currently there are several recommended algorithms, such as PBKDF2, scrypt and Argon2. Argon2 is the winner of the Password Hashing Competition and is now considered the state of the art for password hashing.
For this analysis, I only needed to focus on PBKDF2. Its design is simple:

It uses an HMAC function with the user password as the key. The input message is simply the salt concatenated with the constant 1. In the previous figure, c is the number of iterations. At the end we obtain a key. However, with this construction alone, the key cannot be larger than the output of the HMAC. So how do we construct 1024-bit keys? PBKDF1 was originally designed with this limitation. In PBKDF2, the solution is to iterate the construction several times: the input message is the salt concatenated with the constant 1 to obtain the first key part key[1], then the constant 2 is used to obtain the second key part key[2], and so on. The derived key is simply the concatenation of all the key parts obtained during these iterations.
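The construction described above can be sketched in Python. This is a minimal, illustrative PBKDF2 with HMAC-SHA-256, cross-checked against the standard library's implementation:

```python
import hashlib
import hmac

def pbkdf2(password: bytes, salt: bytes, iterations: int, dklen: int) -> bytes:
    """Minimal PBKDF2-HMAC-SHA256 (RFC 2898): for block i,
    U1 = HMAC(password, salt || INT(i)), Uj = HMAC(password, U(j-1)),
    and the block value is U1 xor U2 xor ... xor Uc."""
    blocks = []
    n_blocks = -(-dklen // 32)  # ceil division; SHA-256 outputs 32 bytes
    for i in range(1, n_blocks + 1):
        u = hmac.new(password, salt + i.to_bytes(4, "big"), hashlib.sha256).digest()
        t = u
        for _ in range(iterations - 1):
            u = hmac.new(password, u, hashlib.sha256).digest()
            t = bytes(a ^ b for a, b in zip(t, u))
        blocks.append(t)
    return b"".join(blocks)[:dklen]

# Cross-check against the standard library implementation.
dk = pbkdf2(b"password", b"salt", 1000, 64)
assert dk == hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 1000, 64)
```

Note how a 64-byte output already requires two blocks, exactly the mechanism described above for building keys larger than one HMAC output.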
If we go back to our binary, the key derivation function was not identified directly by the Ghidra Function ID database and I had to reverse it manually. Thankfully, some calls to the MD5 hash function were identified under several layers of calls. After some effort, I was able to reverse it. Here is how it looks:

The design is similar to PBKDF2, but it uses the plain MD5 hash, which is not an HMAC. The last missing piece was the generation of the salt. I struggled to find where it was generated, until I finally figured out that it was hardcoded directly in the binary.

It looks randomly generated, but it is definitely not unique, since all vaults created with the software use the same salt for key derivation. As a consequence, users with the same password end up with the same decryption key. Later I discovered that the same salt value is also shared among all the vendors: SanDisk, Sony and Lexar. A less critical problem is that the number of iterations is also fixed, at 1000. This iteration count was fine when PBKDF2 was designed, but nowadays it has to be much higher: for example, OWASP now recommends 310,000 iterations for PBKDF2 with HMAC-SHA-256. Beyond these parameter issues, the construction of the key derivation function is itself flawed.

The part in red does not depend on the salt or the constant. This enables precomputation attacks even when the salt is randomly generated: an attacker can precompute the part in red and XOR the result with the salt afterwards. It also reduces the cost of generating large keys, since the part in red can be reused at each iteration. This shows once again that implementing custom cryptography often leads to failure. This was the first problem I reported to ENCSecurity, and CVE-2021-36750 was issued for it. ENCSecurity later patched the problem by using the OpenSSL implementation of PBKDF2-SHA256 together with a randomly generated salt and 100,000 iterations.
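To illustrate the class of flaw (this is a deliberately simplified model, not the exact ENC construction): if the expensive iterated part of a KDF does not depend on the salt, an attacker can compute it once per candidate password and reuse it for every salt:

```python
import hashlib

def flawed_kdf(password: bytes, salt: bytes, iterations: int = 1000) -> bytes:
    """Simplified model of the flaw (NOT the exact ENC construction):
    the expensive iterated part depends only on the password, and the
    salt is only mixed in at the end."""
    h = hashlib.md5(password).digest()
    for _ in range(iterations - 1):  # salt-independent: precomputable
        h = hashlib.md5(h).digest()
    return bytes(a ^ b for a, b in zip(h, salt))  # salt mixed in afterwards

def precompute(password: bytes, iterations: int = 1000) -> bytes:
    """The attacker pays the iteration cost once per candidate password."""
    h = hashlib.md5(password).digest()
    for _ in range(iterations - 1):
        h = hashlib.md5(h).digest()
    return h

pre = precompute(b"hunter2")
for salt in (b"A" * 16, b"B" * 16):
    # ...then derives the key for any salt with a single cheap XOR.
    assert flawed_kdf(b"hunter2", salt) == bytes(a ^ b for a, b in zip(pre, salt))
```

A proper construction mixes the salt into every iteration, which is precisely what PBKDF2's HMAC-based design achieves.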
Now that I had the key derivation function, I checked how the password was verified. A file named filesystem.dat, stored in the folder C:\Users\user\AppData\Local\ENCSecurityBV\ENCDataVault70\ENCDataVault, contains an encrypted magic value. When the decryption of this magic value yields 0xd2c3b4a1, the password is considered correct. The decryption algorithm uses the OpenSSL AES implementation. For the AES-128 option, the encryption is simply AES in CTR mode with a 128-bit key generated by the key derivation function described earlier. For the other modes, however, the construction is more curious.

It uses multiple AES encryptions: two for the AES-256 option, four for AES-512 and eight for AES-1024. Each AES encryption is performed with a 128-bit key. The key parts generated previously by the key derivation function are XORed into the first eight bytes of the nonce prior to encryption, and finally all the results are XORed together. Once the password is verified, eight file encryption keys are decrypted with the same algorithm. These keys are used to encrypt and decrypt the files stored in the vault. This is a common approach, since it avoids re-encrypting all the files when a user changes her password: only the file encryption keys have to be re-encrypted.
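The combination step described above can be sketched as follows. Since Python's standard library has no AES, a hash-based stand-in block function is used here; the XOR-combination logic, not the cipher, is the point, and which bytes of each key part are XORed into the nonce is an assumption for illustration:

```python
import hashlib

def prf(key: bytes, block: bytes) -> bytes:
    """Stand-in for AES-128 encryption of one 16-byte block (the real
    software uses OpenSSL AES; Python's stdlib has no AES)."""
    return hashlib.md5(key + block).digest()

def multi_encrypt(key_parts: list, nonce: bytes) -> bytes:
    """Sketch of the multi-AES construction described above: each 128-bit
    key part is XORed into the first eight bytes of the nonce, each result
    is encrypted under that key part, and all outputs are XORed together."""
    out = bytes(16)
    for part in key_parts:
        # Assumption: the first 8 bytes of the key part are the ones XORed in.
        tweaked = bytes(a ^ b for a, b in zip(nonce[:8], part[:8])) + nonce[8:]
        block = prf(part, tweaked)
        out = bytes(a ^ b for a, b in zip(out, block))
    return out

# "AES-1024" option: eight 128-bit key parts from the KDF.
keys = [bytes([i]) * 16 for i in range(8)]
keystream_block = multi_encrypt(keys, bytes(16))
assert len(keystream_block) == 16
```

Note that because the results are merely XORed together, the order of the key parts does not matter, a first hint that the extra encryptions add less than their key sizes suggest.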
I now had everything needed to implement a John the Ripper plugin that allows everybody to bruteforce AES-1024 military grade encryption! The plugin is now integrated into the main repository and also supports bruteforcing the new key derivation function based on PBKDF2-HMAC-SHA256.
The last point I needed to elucidate was how the files themselves are encrypted. At first I thought they used the same AES-based encryption algorithm. For AES-128 it is indeed the same method, AES in CTR mode. However, I was not able to decrypt the vault when the other options were used. I went back to Ghidra and reversed the algorithm further. It turned out that the file decryption algorithm is different.

It appears that only two AES iterations are performed for any other option, and only the last file key (for example, key[8] for AES-1024) is used, XORed with the nonce and counter. The construction is close to the CTR mode of operation and thus also has the malleability property: an attacker with access to the encrypted data can make modifications that are not detected during decryption, and the same modification is applied to the decrypted file.
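The malleability of CTR-like constructions is easy to demonstrate with a toy keystream (a hash-based stand-in for AES; the argument is identical for any XOR-based stream mode):

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR keystream using MD5 as a stand-in block function
    (stdlib has no AES); the malleability argument is identical."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.md5(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 8
pt = b"pay alice $0100"
ct = xor(pt, ctr_keystream(key, nonce, len(pt)))

# An attacker flips ciphertext bits without knowing the key...
tampered = xor(ct, xor(b"pay alice $0100", b"pay mallory $99"))
# ...and the decrypted plaintext carries exactly the same modification.
assert xor(tampered, ctr_keystream(key, nonce, len(pt))) == b"pay mallory $99"
```

This is why stream-cipher modes must be paired with an integrity mechanism (a MAC or an AEAD mode) when attackers can touch the ciphertext.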
In addition, this construction gives a maximum security level of only 256 bits, since we theoretically have to bruteforce the first file key, key[1], and the last one, key[8]. But we can do better: if we assume we know two blocks whose corresponding plaintext is zero, we can write the following formula.

Since the plaintext is zero, it does not appear in the equations. We can then decrypt each part of the equations to obtain:

Finally, we XOR both equations and obtain the following result.

There is no remaining dependency on the last file encryption key, which shows a reduction to 128 bits of security: we only need to bruteforce the first file key until we obtain the right-hand side of the equation, which is known. We assumed that we had found two separate blocks matching the encryption of a zero block. In fact, files are encrypted in a way that includes some zero blocks. Let’s look at the encrypted file format.
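The cancellation can be demonstrated with a toy model, with the assumptions stated in the comments: a small Feistel cipher stands in for AES (the standard library has no AES), and the keystream block for counter i is modeled as E_k1(E_k1((nonce || i) XOR key[8])), which is one reading consistent with the description above, not the exact reversed algorithm:

```python
import hashlib

def _f(key: bytes, half: bytes) -> bytes:
    return hashlib.md5(key + half).digest()[:8]

def enc(key: bytes, block: bytes) -> bytes:
    """Toy 4-round Feistel cipher on 16-byte blocks (stand-in for AES-128)."""
    l, r = block[:8], block[8:]
    for i in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _f(key + bytes([i]), r)))
    return l + r

def dec(key: bytes, block: bytes) -> bytes:
    l, r = block[:8], block[8:]
    for i in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _f(key + bytes([i]), l))), l
    return l + r

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Modeling assumption: keystream block i = E_k1(E_k1((nonce || i) XOR key8)),
# so a zero plaintext block leaks the keystream block directly.
k1, key8, nonce = b"1" * 16, b"8" * 16, b"N" * 8

def zero_block_ct(i: int) -> bytes:
    return enc(k1, enc(k1, xor(nonce + i.to_bytes(8, "big"), key8)))

c5, c9 = zero_block_ct(5), zero_block_ct(9)
# Decrypting both under k1 and XORing cancels key8 entirely:
lhs = xor(dec(k1, dec(k1, c5)), dec(k1, dec(k1, c9)))
rhs = xor(nonce + (5).to_bytes(8, "big"), nonce + (9).to_bytes(8, "big"))
assert lhs == rhs  # known value, no dependency on key8: only k1 remains to bruteforce
```

An attacker therefore guesses k1 alone and checks the candidate against the known right-hand side, which is the 128-bit reduction described above.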

The IVs are stored in clear, followed by the file name length. The file name is then stored encrypted, followed by zero blocks until offset 0x200, where the encrypted file content starts. Thus, our previous analysis holds, and the solution provides only a 128-bit security level for all the encryption options. These problems were also reported to ENCSecurity, and CVE-2021-36751 was assigned for them.
This analysis shows once more that rolling custom cryptographic algorithms is difficult, and that the security level of a solution does not depend on the number of encryptions performed.
This bulletin was written by Yann Lehmann of the Kudelski Security Threat Detection & Research Team
According to a recent report from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Multi-State Information Sharing & Analysis Center (MS-ISAC) released on May 18th, exploitation of the vulnerability has been observed on both government and private-sector networks.
Several proof-of-concept (PoC) exploits have since been released to the public, making exploitation of the vulnerability much more accessible, including to less sophisticated actors. The Cyber Fusion Center (CFC) strongly urges its clients to apply the required patches to their affected devices. If any BIG-IP management ports or self IPs are or were publicly exposed, the CFC recommends considering those devices compromised and hunting for malicious activity. CISA and MS-ISAC have provided a number of signatures for that purpose in the “Detection Methods” section of their Cybersecurity Advisory.
iControl REST is an evolution of the F5 iControl framework. Leveraging this Representational State Transfer (REST) API, an authenticated user can accomplish anything that can be done from the F5 BIG-IP command line. It is an extremely powerful API.
On May 4th, 2022, F5 disclosed a critical vulnerability, CVE-2022-1388. It may allow an unauthenticated attacker with network access to the management port or the self IP addresses of a BIG-IP system to leverage the iControl REST component, because some requests to iControl REST can bypass the authentication mechanism entirely. Given the capabilities of this component, anyone with such network access can execute arbitrary system commands and modify services or files. By the nature of the iControl REST component, this is a control-plane vulnerability that does not expose the data plane.
At the time of writing there is no publicly known exploit of this vulnerability, and F5 has not disclosed any details on the requests that are able to bypass the iControl REST authentication. Moreover, with a good architectural design of the BIG-IP appliances, the management port and the self IP addresses should not be directly exposed without access control. However, in the CFC's experience, such undisclosed requests are frequently and quickly reverse-engineered by security researchers and malicious actors. As such, the CFC recommends mitigating the risk immediately by patching. In addition, two other, less impactful vulnerabilities, CVE-2022-26415 and CVE-2022-29474, are also mitigated by patching.
CVE-2022-1388 impacts only the following BIG-IP versions:
Branch | Vulnerable versions | Fix introduced in | Severity / CVSSv3 | Impacted component
16.x   | 16.1.0 – 16.1.2     | 16.1.2.2          | Critical / 9.8    | iControl REST
15.x   | 15.1.0 – 15.1.5     | 15.1.5.1          | Critical / 9.8    | iControl REST
14.x   | 14.1.0 – 14.1.4     | 14.1.4.6          | Critical / 9.8    | iControl REST
13.x   | 13.1.0 – 13.1.4     | 13.1.5            | Critical / 9.8    | iControl REST
12.x   | 12.1.0 – 12.1.6     | Will not fix      | Critical / 9.8    | iControl REST
11.x   | 11.6.1 – 11.6.5     | Will not fix      | Critical / 9.8    | iControl REST
If you are running a later version than the one mentioned in the “Fix introduced in” column above, that version contains the fix and you are not impacted.
F5 provided fixes for the most recent branches of BIG-IP devices. The CFC recommends immediately patching your vulnerable version. If no fix exists for your branch, the CFC recommends upgrading to a newer branch.
Until it is possible to install a fixed version, F5 has provided temporary mitigations which restrict access to iControl REST to trusted networks or devices only. As this drastically decreases the attack surface, the CFC recommends applying the described mitigation steps immediately, until the BIG-IP devices have been patched.
These mitigations include restricting or blocking iControl REST access through the management interface and the self IP addresses. Another way to mitigate the CVE consists in modifying the httpd configuration.
Please refer to the official documentation https://support.f5.com/csp/article/K23605346#proc1 for full details on how to apply mitigations.
The CFC also recommends reviewing the audit logs within the BIG-IP appliance for any suspicious activity.
While there are currently no known exploits, the CFC is contacting all Security Device Management (SDM) clients on F5 to organize the patching of all their impacted appliances and to ensure there are no traces of exploitation of this CVE.
On February 23rd, the UK National Cyber Security Center (NCSC) with the US Cybersecurity &
Infrastructure Security Agency (CISA) and other security agencies released information that the
threat actor group known as “Voodoo Bear” or “Sandworm” has been leveraging a modular and
fairly sophisticated implant dubbed “Cyclops Blink”.
Cyclops Blink appears to be a replacement of the previously discovered and documented VPNFilter modular implant framework, also previously leveraged by Sandworm. The VPNFilter and Cyclops Blink implants primarily target Small Office/Home Office (SOHO) network devices. The VPNFilter implant was previously used by the threat actor to redirect and manipulate traffic and infected devices were also used to maintain persistence on victim networks.
The devices infected by Cyclops Blink have been incorporated into a large-scale botnet operated
by the threat actor, which appears to have first become active as early as June 2019. As of today,
of the 1500+ impacted IPv4 addresses that were reported, around 40% are geolocated in the United States.
In its current iteration, Cyclops Blink is highly modular and provides attackers with several capabilities (including the ability to write and deploy implant modules on the fly). Cyclops Blink has been observed primarily targeting SOHO devices from WatchGuard (WatchGuard Firebox appliances). Additionally, Cyclops Blink persists on infected WatchGuard devices by abusing the firmware upgrade mechanism.
While Cyclops Blink has only been observed on WatchGuard devices as of today, an assessment of the malware reveals that it could also be compiled and deployed onto other architectures and
firmware.
Organizations with WatchGuard firewalls should review the solution section of this advisory for
details on how to identify a potential infection and restore the system to a known good state. If the Watchguard management interface was exposed to the internet, organizations should assume the appliance has been compromised and investigate the system for signs of the implant prior to upgrading.
All WatchGuard Firebox appliances are currently known to be vulnerable. As such, organizations with Firebox appliances must upgrade them to the latest versions as soon as possible. The latest firmware for WatchGuard Firebox appliances is available for download from:
https://software.watchguard.com/
Before upgrading any appliances, it is critical to assess whether your Firebox appliance may have been infected with Cyclops Blink. WatchGuard, with the assistance of the NSA, CISA and the UK NCSC, has provided different methods to investigate and identify a potential infection, as described at https://detection.watchguard.com/
CISA and the NCSC both describe the Cyclops Blink malware as a successor to an earlier Sandworm tool known as VPNFilter, which had infected over half a million routers before it was identified by Cisco and the FBI and dismantled in 2018.
This implant is a multi-stage, modular platform with versatile capabilities to support both
intelligence-collection and potentially destructive cyber-attack operations. It targets devices
running firmware based on Busybox and Linux and is compiled for several CPU architectures.
The first stage primarily ensures persistence (via crontab), which sets it apart from other IoT malware such as Mirai. Furthermore, it implements various redundant mechanisms to resolve the address of the second-stage deployment server. Once downloaded, the second stage exposes the usual command-and-control modules of a remotely managed implant, including:
• File collection
• Command execution
• Data exfiltration
• Device management
However, some of the most interesting modules are implemented in a third stage and are deployed independently as “plugins”. One such plugin implements a packet sniffer that allows inspection of traffic and, consequently, theft of credentials.
The Cyclops Blink malware comes in the form of a firmware update which abuses WatchGuard’s standard firmware upgrade process to install the malicious firmware. It leverages a weakness in the firmware update process: the Hash-based Message Authentication Code (HMAC) over the image can be recalculated, because the key used to initialize the hash calculation is hard-coded in WatchGuard Firebox devices. This allows persistence across reboots.
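The consequence of a hard-coded HMAC key can be illustrated with a short sketch. The key value and image contents below are made up for illustration; only the principle (whoever knows the shipped key can produce a valid tag for any image) reflects the issue described above:

```python
import hashlib
import hmac

# Hypothetical hard-coded key, for illustration only; the real key
# ships inside the device firmware and is not reproduced here.
HARDCODED_KEY = b"example-hardcoded-key"

def sign_firmware(image: bytes) -> bytes:
    """Model of an HMAC-based firmware integrity check whose key is
    embedded in every device: anyone who extracts the key can re-sign."""
    return hmac.new(HARDCODED_KEY, image, hashlib.sha256).digest()

legit = b"\x7fELF...legitimate firmware image..."
malicious = b"\x7fELF...implant firmware image..."

# The device accepts any image whose HMAC verifies -- including the
# attacker's, because the attacker can recompute with the same key.
assert hmac.compare_digest(
    sign_firmware(malicious),
    hmac.new(HARDCODED_KEY, malicious, hashlib.sha256).digest(),
)
```

This is why firmware authenticity requires asymmetric signatures (where devices hold only a public verification key) rather than a shared symmetric secret baked into every unit.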
The Cyclops Blink malware has the following capabilities (most critical ones listed):
• Add a new module to Cyclops Blink.
• Update the Cyclops Blink Linux ELF executable.
• Update the list of C2 server IPv4 addresses
• Resend the current Cyclops Blink configuration to all running modules
• Gather all system information like sysinfo, /etc/passwd, /proc/mounts/, …
The full technical details are linked in the reference section.
Firmware upgrades are available, and if your Fireboxes are running legitimate firmware you need to upgrade to the latest versions.
If your Fireboxes have been impacted by a malicious firmware, you first need to remediate as described in WatchGuard’s documentation listed below:
https://techsearch.watchguard.com/KB?type=Article&SFDCID=kA16S000000SNyiSAG&lang=en_US
To mitigate the risks until upgrading to the latest version the CFC recommends:
• Ensuring network devices’ management interfaces are not exposed to the internet.
• Ensuring strong authentication material, rotated regularly, on Firebox devices management
interface.
• Monitoring firewall management activities on Fireboxes that have not yet been updated
The CFC has created hunting campaigns and compiled IOCs to identify potential communication
with known Cyclops Blink C2 servers.
• https://www.cisa.gov/uscert/ncas/alerts/aa22-054a
• https://www.watchguard.com/wgrd-news/blog/important-detection-and-remediation-actions-cyclops-blink-state-sponsored-botnet
• https://detection.watchguard.com/
• https://techsearch.watchguard.com/KB?type=Article&SFDCID=kA16S000000SNyiSAG&lang=en_US
• https://www.ncsc.gov.uk/files/Cyclops-Blink-Malware-Analysis-Report.pdf
• https://www.shadowserver.org/news/shadowserver-special-reports-cyclops-blink/
• https://blog.talosintelligence.com/2018/05/VPNFilter.html
In the last few years, several practitioners have proposed zk-focused cryptographic constructions such as hashing and encryption primitives that operate on binary and prime fields and have low multiplicative complexity. These are also called arithmetization-oriented primitives; some examples are the Reinforced Concrete, Rescue and MiMC hash functions. These constructions typically target zkSNARK- and STARK-based applications.
In this post, we focus on Ciminion, an authenticated encryption scheme proposed by Dobraunig et al. and presented at EUROCRYPT 2021, and describe how it can be implemented using a domain-specific language (DSL) for writing zkSNARK circuits. We then compare our implementation with the MiMC cipher in Feistel mode (which uses a permutation that is widely used in circuits) and show how our Ciminion implementation outperforms it for a large number of plaintexts.
zkSNARKs make it possible to convince another party that a particular statement is true while restricting other information about the statement to a minimum. In this case, the statement consists of a deterministic arithmetic circuit with inputs and outputs. The recent availability of domain-specific languages (DSLs) for creating zk-focused applications and circuits (some examples are circom2 and Leo) speeds up the creation of privacy-friendly applications.
There are certain categories of applications that can be easily described using zkSNARKs:
Finally, both zkSNARKs and STARKs are typically used in Layer-2 constructions and have been proposed for a wide range of applications such as network traffic compliance and verifiable machine learning.
In this article, we focus on zkSNARK DSLs, which provide a direct path for building privacy-friendly applications. We refer the reader to other resources for learning the specifics of zkSNARKs, together with recent advances such as PlonKup, elastic SNARKs and Nova.
Using a DSL such as circom2, one can describe a zkSNARK as an arithmetic circuit with inputs (which can be private, public or a mix of both) and outputs. The circom2 compiler transforms the circuit description into a Rank-1 Constraint System (R1CS). Other tools such as snarkjs rely on the output of the circom2 compiler to provide the verifier and prover components that can be used to build applications based on zkSNARKs.
Arithmetic operations in the circuit are always performed modulo a large prime: in the case of circom2, within the scalar field of the pairing-friendly curve BN254 by default (whose security level was re-estimated at 103 bits in https://eprint.iacr.org/2019/885). However, circuits can also be compiled to use the BLS12-381 curve.
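As a quick illustration of what "arithmetic modulo a large prime" means in practice, here is the BN254 scalar field modulus that circom2 uses by default, with a few sanity checks in plain Python:

```python
# Scalar field modulus of BN254 (circom2's default field).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

# Every circuit signal is an element of GF(P); all arithmetic wraps mod P.
a = P - 1                        # the largest field element, i.e. -1
assert (a + 2) % P == 1          # wrap-around on addition
assert (a * a) % P == 1          # (-1) * (-1) = 1
assert pow(3, P - 1, P) == 1     # Fermat's little theorem: P is prime
```

This is why "integers" in a circom2 circuit behave nothing like machine integers: there is no overflow in the usual sense, only reduction modulo P.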
A zk application might need to perform a hash operation, encrypt a sensitive value or derive an authentication tag for a message. Even though typical constructions such as SHA-3 or an AES mode could be implemented in a DSL, schemes that rely on bitwise operations severely hurt the performance, in number of constraints, of the resulting circuits. For this reason, in the last few years several practitioners have proposed hash constructions such as MiMC, Poseidon and Rescue, and ciphers such as GMiMC and Ciminion. These constructions mainly operate on prime and binary fields and try to reduce the number of expensive arithmetic operations such as multiplications.
Ciminion is an authenticated encryption scheme that was presented at EUROCRYPT 2021 by Dobraunig et al. It was designed to target zero-knowledge applications such as zkSNARKs, STARKs and multi-party computation. It requires a limited number of field multiplications and can be implemented over a prime field.
Ciminion reduces the number of field multiplications by using the Toffoli gate and by relying on the Farfalle construction to minimize the number of rounds. In contrast to other schemes such as MiMC, which use the power mapping f(x) = x^3, the Toffoli gate transforms a triple (a, b, c) into the triple (a, b, ab + c).
Farfalle is a permutation-based construction for a pseudorandom function (PRF) that focuses on parallelization; it was designed by the Keccak Team in 2016. It typically uses a family of permutations with different numbers of rounds, organized in three layers: a mask derivation function, a compression layer (pC) and an expansion layer (pE). The Farfalle construction was designed with versatility in mind and can be transformed into an authenticated encryption (AE) construction.
Ciminion receives the following input parameters: a nonce N and two subkeys (K_1 and K_2). It first applies the permutation pC to them and outputs an intermediate state. This state is then transformed by the pE permutation, and two of the resulting elements are used to encrypt the first two plaintext elements P_1 and P_2. The remaining plaintext elements P_{2i} and P_{2i+1} are processed as follows: another pair of subkey elements is added to the intermediate state, then the rolling function and the pE permutation are applied to obtain another two field elements, which are finally used to encrypt the subsequent pair of plaintext field elements.

Both the pE and pC permutations transform a triple (a, b, c) using the round function, with a different number of rounds:

RC are fixed round constants. They are generated using SHAKE-256 and can be precomputed before the authenticated encryption operation. Finally, the rolling function performs the Toffoli gate operation:

In Ciminion, a set of subkeys needs to be generated before performing authenticated encryption. This is done using two master keys, MK1 and MK2, and a public IV. Ciminion expands the master key elements using a sponge construction that relies on the pC permutation:

For a given circuit written in circom2, the compiler generates an R1CS version of the circuit in the form (A * B) - C = 0, together with the code needed to compute a witness. A zkSNARK implementation such as snarkjs can then use either Groth16 or PLONK to produce and verify proofs.
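As a tiny illustration of the R1CS form, here is a hand-built check in Python of the single multiplicative constraint b0 = s2 + s0*s1 from the Ciminion rolling function; the witness layout chosen below is a simplifying assumption for illustration:

```python
# Toy check of the R1CS form (A.w) * (B.w) - (C.w) = 0 for the single
# multiplicative constraint b0 = s2 + s0*s1 (the rolling function).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def dot(row, w):
    """Inner product of a constraint row with the witness, mod P."""
    return sum(r * x for r, x in zip(row, w)) % P

# Assumed witness layout: [1, s0, s1, s2, b0]
s0, s1, s2 = 3, 5, 7
w = [1, s0, s1, s2, (s2 + s0 * s1) % P]

A = [0, 1, 0, 0, 0]   # selects s0
B = [0, 0, 1, 0, 0]   # selects s1
C = [0, 0, 0, -1, 1]  # selects b0 - s2

# The witness satisfies the constraint: s0 * s1 - (b0 - s2) = 0 mod P.
assert (dot(A, w) * dot(B, w) - dot(C, w)) % P == 0
```

Each `<==` in a circom2 circuit compiles to one or more rows of this shape, which is why the constraint count is the natural cost metric for comparing circuit implementations.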
In circom2, each circuit is a component that can be imported into another circuit, which makes it possible to build libraries of reusable gadgets such as circomlib. Constraints in circom2 are described using the “===” operator and, by default, all signals are private.
We refer the reader to the circom2 documentation for installation instructions and language details. We have found hardhat-circom and the circom2 syntax plugins for vscode and vim very useful.
In this section, we explain how to implement the Ciminion components in circom2.
First, we start with the smaller components and compose them in a bottom-up fashion until we arrive at the Farfalle-like construction of Ciminion. The rolling function is the simplest component of Ciminion: it just performs the Toffoli gate operation. It is a component with 3 inputs and 3 outputs, the output being the transformed triple:
pragma circom 2.0.3;
template Rolling() {
signal input s0;
signal input s1;
signal input s2;
signal output b0;
signal output b1;
signal output b2;
b0 <== s2 + s0*s1;
b1 <== s0;
b2 <== s1;
}

The Ciminion round function takes as inputs the triple a, b, c (e.g. a_0, b_0, c_0) and four round constants (e.g. RC_0, RC_1, RC_2, RC_3), and produces 3 outputs: a_1, b_1, c_1, that is, the triple after the transformation. We can then instantiate the permutation (round function f_i) to create the iterated permutations pC and pE:
pragma circom 2.0.3;
template Permutation() {
signal input a0;
signal input b0;
signal input c0;
signal input RC0;
signal input RC1;
signal input RC2;
signal input RC3;
signal output a1;
signal output b1;
signal output c1;
signal a0_mul_b0_plus_c0_plus_b0;
a0_mul_b0_plus_c0_plus_b0 <== a0*b0 + c0 + b0;
a1 <== c0 + a0*b0 + RC2;
b1 <== a0 + RC3*(a0_mul_b0_plus_c0_plus_b0) + RC0;
c1 <== a0_mul_b0_plus_c0_plus_b0 + RC1;
}

Once we have the permutation implemented, we only need to iterate it for a specific number of rounds with the corresponding list of round constants:
template IteratedPermutationN() {
signal input a0;
signal input b0;
signal input c0;
signal output a1;
signal output b1;
signal output c1;
var c[536] = [
292334644102394411537362212093572878140,
250282066453690133315708418387534650045,
25210252274841859825108081164557115781,
16246747660448753010909888379129524409,
131669148524554050620166690622542333908,
73395003443371763039182051680716568403,
[...]

Given a number of rounds nRounds per permutation, we instantiate one Permutation component per iteration (in circom2, a value can be assigned only once to a signal). The outputs of permutation component i are connected to the inputs of permutation component i+1, and the final triple is the output of permutation component nRounds - 1:
component perm[nRounds];
for (var i=0; i<nRounds; i++) {
perm[i] = Permutation();
perm[i].RC0 <== c[4*i];
perm[i].RC1 <== c[4*i + 1];
perm[i].RC2 <== c[4*i + 2];
perm[i].RC3 <== c[4*i + 3];
if (i == 0) {
perm[i].a0 <== a0;
perm[i].b0 <== b0;
perm[i].c0 <== c0;
} else {
perm[i].a0 <== perm[i-1].a1;
perm[i].b0 <== perm[i-1].b1;
perm[i].c0 <== perm[i-1].c1;
}
}
a1 <== perm[nRounds - 1].a1;
b1 <== perm[nRounds - 1].b1;
c1 <== perm[nRounds - 1].c1;
}

The key generation operation in Ciminion takes as input the IV and two master keys. It iterates the permutation pC (permutation N in the Ciminion reference code) on this input in order to generate the subkeys K_i. Each subkey K_i corresponds to the first value of the output triple (a1, b1, c1), that is, a_i. We can write a component that is instantiated according to the number of subkeys, e.g. nKeys:
pragma circom 2.0.3;
include "permutation.circom";
template KeySchedule(nKeys) {
signal input s0;
signal input s1;
signal input s2;
signal output keys[nKeys];
component p_n[nKeys];
for (var i=0; i<nKeys; i++) {
p_n[i] = IteratedPermutationN();
if (i == 0) {
p_n[i].a0 <== s0;
p_n[i].b0 <== s1;
p_n[i].c0 <== s2;
} else {
p_n[i].a0 <== p_n[i-1].a1;
p_n[i].b0 <== p_n[i-1].b1;
p_n[i].c0 <== p_n[i-1].c1;
}
keys[i] <== p_n[i].a1;
}
}

Finally, we have all the components needed to build the authenticated encryption primitive.
The component receives as inputs the nonce, the two master key elements and nPairs pairs of plaintext elements, and outputs the corresponding ciphertext pairs together with a TAG element that authenticates them:
template CiminionEnc(nPairs) {
    var nSubKeys = 2*nPairs + 3;
    signal input MK_0;
    signal input MK_1;
    signal input nonce;
    signal input PT[nPairs*2];
    signal output CT[nPairs*2];
    signal output TAG;

The first part of the encryption operation starts the key schedule:
    component key_schedule = KeySchedule(nSubKeys);
    key_schedule.s0 <== 1;
    key_schedule.s1 <== MK_0;
    key_schedule.s2 <== MK_1;

Then, the permutation pC is applied to the supplied nonce and to the first pair of subkeys. Then, for each pair of plaintexts, the rolling and pE permutations are applied:
    for (var i = 0; i < nPairs; i++) {
        // roll
        rolls[i] = Rolling();
        if (i == 0) {
            // p_n holds the output of the pC permutation applied to the nonce
            // (its instantiation is not shown in this excerpt)
            rolls[i].s0 <== p_n.a1 + key_schedule.keys[2*i + 4];
            rolls[i].s1 <== p_n.b1 + key_schedule.keys[2*i + 3];
            rolls[i].s2 <== p_n.c1;
        } else {
            rolls[i].s0 <== rolls[i-1].b0 + key_schedule.keys[2*i + 4];
            rolls[i].s1 <== rolls[i-1].b1 + key_schedule.keys[2*i + 3];
            rolls[i].s2 <== rolls[i-1].b2;
        }
        // second permutation pR
        p_rs[i] = IteratedPermutationR();
        p_rs[i].a0 <== rolls[i].b0;
        p_rs[i].b0 <== rolls[i].b1;
        p_rs[i].c0 <== rolls[i].b2;

Afterwards, ciphertext pairs are generated and the MAC is updated:
        CT[2*i] <== p_rs[i].a1 + PT[2*i];
        CT[2*i + 1] <== p_rs[i].b1 + PT[2*i + 1];
        // MAC generation
        if (i == 0) {
            acc_1[i] <== CT[2*i] * key_schedule.keys[0];
            acc_2[i] <== acc_1[i] + CT[2*i + 1];
            MAC[i] <== acc_2[i] * key_schedule.keys[0];
        } else {
            acc_1[i] <== (MAC[i-1] + CT[2*i]) * key_schedule.keys[0];
            acc_2[i] <== acc_1[i] + CT[2*i + 1];
            MAC[i] <== acc_2[i] * key_schedule.keys[0];
        }
    }

The output corresponds to the ciphertext pairs and the tag:
    component p_r2 = IteratedPermutationR();
    p_r2.a0 <== inner_upper_branches_0;
    p_r2.b0 <== inner_upper_branches_1;
    p_r2.c0 <== inner_upper_branches_2;
    TAG <== MAC[nPairs - 1] + p_r2.a1;

circom2 allows us to debug and test a circuit implementation via Mocha. In this section, we show how we can test the Ciminion implementation that we have created. First, we need to declare a main component in the circuit description that we want to test. We start with the Ciminion rolling function:
component main = Rolling();

Via the ffjavascript package, we can perform finite-field arithmetic operations within the BN254 curve scalar field and compare the results. For instance, we can generate random inputs for the Ciminion rolling function and execute the function in JavaScript:
const F1Field = require("ffjavascript").F1Field;
const Scalar = require("ffjavascript").Scalar;
const crypto = require('crypto').webcrypto;
exports.p = Scalar.fromString("21888242871839275222246405745257275088548364400416034343698204186575808495617");
const Fr = new F1Field(exports.p);
let a = new BigUint64Array(1);
let r1 = crypto.getRandomValues(a)[0];
let r2 = crypto.getRandomValues(a)[0];
let r3 = crypto.getRandomValues(a)[0];
const s0 = Scalar.fromString(r1.toString());
const s1 = Scalar.fromString(r2.toString());
const s2 = Scalar.fromString(r3.toString());
// b0 = s2 + s0*s1 in the field, computed with field arithmetic
const b0 = Fr.add(s2, Fr.mul(s0, s1));
const b1 = s0;
const b2 = s1;

We obtain the witness values from the rolling circuit:
const circuit = await wasm_tester(path.join(__dirname, "circuits", "rolling.circom"));
let witness;
witness = await circuit.calculateWitness({ "s0": s0, "s1": s1, "s2": s2 }, true);

and compare:
await circuit.assertOut(witness, {"b0": b0});
await circuit.assertOut(witness, {"b1": b1});
await circuit.assertOut(witness, {"b2": b2});

From the command line, we invoke Mocha:
$ mocha rolling.js
Rolling function
✔ Conformance test (47ms)
1 passing (52ms)

Finally, we can also test that the encryption was performed correctly by decrypting the result. In that case, we can first write a circuit that decrypts a set of ciphertexts and call both circuits from Mocha.
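In the field, encryption adds a keystream element to each plaintext and decryption subtracts it again, so a correct decryption circuit must recover the plaintexts exactly. The following plain-JavaScript sketch illustrates this round trip with BigInt arithmetic; the keystream values here are arbitrary placeholders, not actual Ciminion permutation outputs:

```javascript
// BN254 scalar field modulus, as used in the tests above
const p = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const mod = (x) => ((x % p) + p) % p;

// Encryption: CT[i] = PT[i] + KS[i] (mod p); decryption: PT[i] = CT[i] - KS[i] (mod p)
const encrypt = (pt, ks) => pt.map((x, i) => mod(x + ks[i]));
const decrypt = (ct, ks) => ct.map((x, i) => mod(x - ks[i]));

// Placeholder plaintexts and keystream elements
const pt = [3n, p - 1n, 42n, 7n];
const ks = [11n, 5n, p - 2n, 123456789n];

const ct = encrypt(pt, ks);
const recovered = decrypt(ct, ks);
// recovered equals pt element-wise
```

This mirrors the subtraction performed by the decryption circuit described next.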
In order to reverse the encryption operation, we need to obtain the plaintexts as:
PT[2*i] <== CT[2*i] - p_rs[i].a1;
PT[2*i + 1] <== CT[2*i + 1] - p_rs[i].b1;

We import both the encryption and decryption circuits and check the authentication tag and the decryption values:
/* Decryption test for Ciminion circuit */
const chai = require("chai");
const path = require("path");
const F1Field = require("ffjavascript").F1Field;
const Scalar = require("ffjavascript").Scalar;
exports.p = Scalar.fromString("21888242871839275222246405745257275088548364400416034343698204186575808495617");
const Fr = new F1Field(exports.p);
const wasm_tester = require("circom_tester").wasm;
const assert = chai.assert;
describe("Ciminion - encryption operation", function () {
    this.timeout(100000);
    it("Authentication tag check", async() => {
        const ciminion_input = {
            "MK_0": "0",
            "MK_1": "0",
            "nonce": "1",
            "IV": "1",
            "PT": ["21888242871839275222246405745257275088548364400416034343692024637573625142322", "21888242871839275222246405745257275088548364400416034343693920155242279081918", "21888242871839275222246405745257275088548364400416034343692024637573625142322", "21888242871839275222246405745257275088548364400416034343693920155242279081918"]
        };
        const circuit = await wasm_tester(path.join(__dirname, "circuits", "ciminion_enc.circom"));
        let witness;
        witness = await circuit.calculateWitness(ciminion_input, true);
        await circuit.assertOut(witness, {"TAG": "1300322832108596540141310981879129316384285895603221372961580627161106587830"});
        await circuit.assertOut(witness, {"CT": ["21592519839218542425120198614531742298033892440087867998118713380820756220718","7889798674627961413366316750795654309310714845357364960283444849787781529858","21049421697506414118152249991945313702405695293538082623907752021251428302407","5714257272097132615426035247399194719164619087183836862864633319728296164225"]});
    });
    it("Decryption check", async() => {
        const ciminion_input_dec = {
            "MK_0": "0",
            "MK_1": "0",
            "nonce": "1",
            "IV": "1",
            "CT": ["21592519839218542425120198614531742298033892440087867998118713380820756220718","7889798674627961413366316750795654309310714845357364960283444849787781529858","21049421697506414118152249991945313702405695293538082623907752021251428302407","5714257272097132615426035247399194719164619087183836862864633319728296164225"]
        };
        const circuit_dec = await wasm_tester(path.join(__dirname, "circuits", "ciminion_dec.circom"));
        let witness_dec;
        witness_dec = await circuit_dec.calculateWitness(ciminion_input_dec, true);
        await circuit_dec.assertOut(witness_dec, {"PT": ["21888242871839275222246405745257275088548364400416034343692024637573625142322", "21888242871839275222246405745257275088548364400416034343693920155242279081918", "21888242871839275222246405745257275088548364400416034343692024637573625142322", "21888242871839275222246405745257275088548364400416034343693920155242279081918"]});
    });
});

Finally, we can also test a circuit that just recomputes the MAC of a given ciphertext and import it:
    it("MAC recheck", async() => {
        const ciminion_input_dec = {
            "MK_0": "0",
            "MK_1": "0",
            "nonce": "1",
            "IV": "1",
            "CT": ["21592519839218542425120198614531742298033892440087867998118713380820756220718","7889798674627961413366316750795654309310714845357364960283444849787781529858","21049421697506414118152249991945313702405695293538082623907752021251428302407","5714257272097132615426035247399194719164619087183836862864633319728296164225"]
        };
        const circuit_mac = await wasm_tester(path.join(__dirname, "circuits", "ciminion_mac.circom"));
        let witness_mac;
        witness_mac = await circuit_mac.calculateWitness(ciminion_input_dec, true);
        await circuit_mac.assertOut(witness_mac, {"TAG": "1300322832108596540141310981879129316384285895603221372961580627161106587830"});
    });

We have compared the number of constraints required to implement Ciminion with those of MiMC in encryption mode (in particular, Feistel MiMC-2n/n), a construction typically used in circom2 circuits. Ciminion scales better for larger plaintexts, thanks to the underlying Farfalle construction, compared to the Feistel-based MiMC construction.

We have released the code and tests utilized in this article in our GitHub repository.
Part of this work was done during the 2nd 0xPARC Learning Group.
Notes
1. The BN254 curve security level has been estimated at around 103 bits at the time of writing this post. See https://eprint.iacr.org/2022/586.
Last update: 14th July 2022
This advisory was written by Travis Holland and Eric Dodge of the Kudelski Security Threat Detection & Research Team
Incontroller/Pipedream is a collection of sophisticated tools thought to be created by a group dubbed “Chernovite” by Dragos. Chernovite is assessed to be a state-sponsored adversary that developed the toolkit for use in future operations. The primary focus of this toolkit is the electric and natural gas verticals; however, it is not limited solely to those. At this time, the CFC has no intelligence that Pipedream has been successfully deployed in the wild, which has given researchers time to evaluate the tools proactively. The suite of utilities is designed to allow access to and manipulation of Schneider Electric and Omron PLCs, as well as Open Platform Communications Unified Architecture (OPC UA) servers. Dragos, an ICS-focused cyber security company, has broken Incontroller/Pipedream into five categories: Evilscholar, Badomen, Mousehole, Dusttunnel and Lazycargo.
When properly used, these tools allow an attacker to scan for devices, brute force passwords, close connections, and even crash the targeted device. PLC implants are used to execute untrusted code on the PLCs; these implants could persist on an impacted PLC for long durations, requiring firmware forensic analysis to reveal their presence.
The CFC has worked with its ICS-aware Network Intrusion Detection System (IDS) partner, Claroty, who has written and published detection signatures for PipeDream. All clients of the CFC’s MDR for O.T have had these signatures updated for their Claroty deployments.
This impacts the following systems, typically located in electrical substations and communicating through the IEC-104 protocol:
Incontroller/Pipedream is a sophisticated and modular set of tools that an attacker can leverage once they have established access within an environment. The foothold is established through any vector available to the attacker and is followed by use of the ASRock driver exploit (CVE-2020-15368) to further escalate privileges and move through the environment. The ASRock exploit is rather trivial and only requires administrative access to escalate privileges and execute arbitrary code with kernel privileges.
The modular architecture and automation of the tool allow for easy addition of more components as needed; a component such as the ASRock exploit could easily be swapped in favor of another exploit or tool. Depending on the PLC type, there are different actions and objectives that the threat actor would look to achieve.
Schneider Electric Devices:
Omron devices:
OPC UA:
Currently Known Indicators of Compromise (IOCs)
There is currently no evidence of Incontroller/PipeDream being deployed for disruptive or destructive effects. It is known to utilize standard ICS protocols and actions to live off the land natively. Proper monitoring for any suspicious use of the ASRock driver can help mitigate a portion of the toolset seen within Incontroller/PipeDream. It is important to note that exploiting the ASRock driver requires the attacker to already have administrator-level privileges on the host; however, future exploits may have different requirements.
The Cyber Fusion Center recommends the following for mitigation, discovery, and recovery:
Additionally, dedicated ICS monitoring can aid in quickly identifying activity outside the baseline that could be indicative of movement and attacks within the ICS infrastructure. We recommend examination of non-baseline activity and restricting access to the following destination ports:
While there are currently no known active deployments of this tooling, the Cyber Fusion Center’s O.T Intrusion Detection System (IDS) partner, Claroty, has developed and published network signatures designed to detect the potential presence of this tooling. All clients of the CFC’s MDR For O.T service have had these new detection signatures deployed on their behalf.
Okta is one of the premier identity providers in the world and is trusted by thousands of customers. The Lapsus$ threat actor group, which has been very active lately targeting Microsoft and Nvidia, has allegedly breached Okta customers’ environments. The group published screenshots of environments that they were able to access. The threat actor claims to have acquired full admin access to Okta.com and that “our focus was ONLY on Okta customers”.
While Okta has confirmed that an attempt to breach Okta in late January 2022 was investigated and contained at the time, Okta has now acknowledged that, after thorough investigation, approximately 2.5% of their customers have been impacted thus far.
Only customers of the “core” Okta product are possibly impacted; there is no impact to Auth0 customers, nor to customers leveraging their HIPAA- and FedRAMP-certified platforms. Okta said that the impacted customers have already been contacted by email.
Finally, Okta’s investigation showed that during a five-day window (Jan 16-21, 2022) the threat actor had access to a third-party (contractor) support engineer’s laptop. The impact is limited to the access that support engineer had. Support engineers have access to limited data such as Jira tickets and lists of users, and can reset passwords and multi-factor authentication (MFA) factors. But Okta confirmed that support engineers are unable to create or delete users, and are not able to obtain those passwords or download customer databases.
If your organization is using Okta and has been notified by Okta that you are impacted, the CFC
strongly recommends contacting your incident response partner to help understand the potential extent of the attack campaign.
We also recommend quickly suspending accounts that may have had their credentials or MFA
devices reset by the threat actors prior to validating that such access has not been abused by the
threat actor.
Even if Okta has not identified that you are an impacted customer, the CFC strongly recommends
that all Okta customers take the following actions:
The CFC leverages Auth0 as a multi-factor and authorization provider. Due to these events, the CFC is working closely with Auth0 to ensure our internal users are not impacted. The Kudelski Security DevOps and Security Engineering team has worked with Okta to confirm that, at this time, the Auth0 platform is not known to be impacted by these events.
Additionally, although Okta has not yet identified any suspicious activity with regards to the Auth0 platform, Kudelski Security has worked to ensure no suspicious activity was identified with regards to user MFA devices.
Additionally, it’s important to note that the CFC does not leverage Auth0 to store internal user credentials. Auth0 is used to provide multi-factor authentication and authorization for access to internal CFC systems and infrastructure. This dual-vendor strategy ensures that no single vendor is a single point of failure. Successful compromise of the CFC’s environment would require that a threat actor compromise both the CFC’s identity and credential provider (Azure Active Directory) and Auth0 in order to gain access to internal CFC systems, or that a threat actor activate a “single vendor” break-glass scenario that would notify the Kudelski Security DevOps team. No such activity has been identified.
The CFC will continue to monitor the situation and will provide updates to clients as more information is available. At this time, there is no indication that the CFC’s Auth0 deployment has been affected and no indication that a threat actor has been able to reset MFA devices.
As the current situation continues to evolve, the Kudelski Security Cyber Fusion Center is
continuously adapting our response to events, intelligence, and new details being released. For
details on how the CFC is responding to newly released information, please review the following
updates.
On March 3rd, the United States Cybersecurity and Infrastructure Security Agency (CISA) updated their catalog of known commonly exploited vulnerabilities, adding 95 new entries after increased analysis of suspected Russian intrusions. The bulk of these newly added vulnerabilities appear to have been actively exploited by Russian threat actors and, as such, should be prioritized for remediation. In response to this new set of known exploited vulnerabilities, the CFC reviewed vulnerabilities found for clients of Kudelski Security’s Vulnerability Scanning Service and proactively updated all impacted clients with the list of known exploited vulnerabilities on their internet-exposed systems.
2. Fine Tuning of Volume Shadow Copy (VSC) Auditing for MDR For Endpoint clients with CrowdStrike Falcon
For clients of the CFC’s MDR for Endpoints service, the CFC continues to fine-tune the extra visibility enabled to identify tampering with Windows Volume Shadow Copy (VSC) “backups”. The CFC has analyzed and reviewed all alerts generated and is working with clients to gather additional input regarding the legitimacy of the activity observed. The CFC will await clients’ feedback in order to fine-tune configurations prior to enabling the VSC deletion prevention features, in order to minimize disruption of any legitimate activity.
3. Analysis and Vigilance of New WMI and SMB Worm used to deploy HermeticWiper in Ukraine
The CFC has continued to monitor information and research about the malicious software deployed against Ukraine. As part of this monitoring, the Kudelski Security Detection Engineering team analyzed the worm component named “HermeticWizard” to ensure the CFC’s security analysis team remained informed about how destructive attacks against Ukraine were carried out. As an example of this analysis, our team created the following diagram describing the logic and potential indicators of compromise of this new worm component:

4. Validation of Newly Deployed Claroty Signatures for MDR for O.T Clients
For our MDR for O.T clients, on February 27th Claroty released a new threat bundle that included new and updated detections for HermeticWiper, as well as detections for newly discovered malware dubbed “SockDetour”, a highly stealthy malware used as a secondary implant on compromised Windows servers since at least July 2019. As we had already ensured that all our Claroty Continuous Threat Detection (CTD) deployments are configured to receive automatic signature updates, all MDR for O.T. clients have already benefited from these extra detection capabilities.
5. Continuous Vigilance and Advisory Development
In addition to the previous measures, the CFC released an advisory on Cyclops Blink, a new malware that appears to be a replacement for the previously discovered and documented VPNFilter malware. While Cyclops Blink is known to target only SOHO devices from WatchGuard so far, an assessment of the malware reveals that it could also be compiled for and deployed onto other architectures and SOHO networking equipment. This leads the CFC to continuously monitor this threat and its evolution in order to identify potentially infected systems and provide clients with mitigation and remediation steps as soon as possible.
As communicated previously, the Kudelski Security Cyber Fusion Center is aware of and actively
monitoring the current global tensions resulting from the events surrounding Russia and Ukraine. The United States Cybersecurity and Infrastructure Security Agency (CISA) has published an advisory regarding potential Russian attempts to utilize cyber-attacks for force projection and as a response to western sanctions.
There are currently no specific threats targeting the United States, other NATO members or partner countries. However, Russian interests have recently expressed discontent with ongoing sanctions and have shown willingness to target “sensitive” assets. Additionally, the CFC is aware of several cyber-criminal groups (such as the Conti ransomware group) who have pledged to attack critical infrastructure of “Russian enemies” in the event that a cyber-attack is launched against Russia. In light of these threats and the ongoing situation with Ukraine, the Cyber Fusion Center is operating with increased vigilance and is actively monitoring for potential cyber-attack related activity as part of these increased tensions. This increased vigilance will continue until tensions ease.
Additionally, the CFC is aware of data wipers (dubbed “HermeticWiper”) that have been discovered and potentially deployed in critical infrastructure within Ukraine. These wipers have also been discovered on systems of Ukrainian government contractors based in Latvia and Lithuania.
The CFC strongly recommends all clients and organizations investigate systems that may be
vulnerable to CISA’s “Known Exploited Vulnerabilities” listed here:
https://www.cisa.gov/known-exploited-vulnerabilities-catalog
The CFC will continue to monitor the situation and provide our CFC analyst team and clients any
additional technical and cyber security related insights.
1. Identified Known Exploited Vulnerabilities discovered on vulnerability scanning client perimeters
For clients using Kudelski Security’s Vulnerability Scanning Service, the CFC has proactively
reviewed vulnerability scan results for internet-exposed systems for vulnerabilities that are known to be actively exploited, according to CISA.
The CFC has prioritized identifying vulnerabilities known to be used by Russian Threat Actors. For
clients who have known exploited vulnerabilities on their internet perimeter, the CFC has opened
cases to communicate which assets may be vulnerable and should be remediated as soon as possible.
The Cyber Fusion Center strongly suggests that clients who use the Kudelski Security Vulnerability Scanning service validate their vulnerability scanning scope to ensure all internet-facing assets are being properly scanned.
2. Enabling Additional visibility into wiper and ransomware technical precursors for MDR for Endpoint clients
Based on guidance from our Detection Engineering and Incident Response organizations, the CFC is working to enable additional CrowdStrike visibility (Volume Shadow Copy – Audit) for technical precursors of ransomware across the client base. As this additional audit visibility may generate false positive CrowdStrike detections, the CFC will be investigating all volume shadow copy related activity, escalating activity believed to be suspicious, and tuning as appropriate.
The CFC will monitor for the effects of the auditing policy mentioned above, and for clients with
CrowdStrike’s Prevent module, the CFC may recommend enabling specific Crowdstrike features
that prevent the deletion of Windows “backups” (volume shadow copies). The CFC will
communicate with clients and get approval prior to enabling any preventative controls.
Note: No additional auditing is currently required for clients with Microsoft Defender for Endpoint.
3. Enabling automatic updates of Claroty threat detection signatures for MDR for O.T clients
The CFC has worked to ensure all Claroty Continuous Threat Detection (CTD) deployments are
configured to receive automatic updates to passive Claroty threat signatures. Additionally, we’ve
worked with Claroty to confirm that the Claroty team will release additional threat signatures as
the situation evolves.
4. Continuous monitoring and vigilance
The Kudelski Security Incident Response, Detection Engineering, and Cyber Fusion Center teams continue to monitor events and provide guidance to both our clients and the CFC.
Please note that the CFC is working diligently to provide the best detection and response capabilities possible during this time of heightened tension. However, some of the activities performed in order to provide better service may lead to an increased number of security events that need to be triaged and investigated on your behalf by the CFC.
This bulletin and guidance will be updated as the situation develops.
Sources
• https://www.cisa.gov/known-exploited-vulnerabilities-catalog
• https://www.cisa.gov/shields-up
• https://twitter.com/cisajen/status/1499496597234855940
Hello Web3/blockchain world, great job. You got people to take you seriously, trusting your projects and investing their money. You’ve sold people on your innovations, and people believe in your projects. Mission accomplished. But with great trust comes great responsibility. It’s time to learn valuable lessons from the areas that have gone before you, the most valuable being that security isn’t a task; it’s a process.
With this post, I hope to add some clarity, both for blockchain projects and security professionals who may be new to the space. This is a bit of a quick mental dump and far from being comprehensive, but I hope it’s the start of a conversation between both the blockchain and security communities.
As an outsider looking at the current state of security with blockchains, it seems as though blockchain projects don’t take security seriously. Nothing could be further from the truth. Blockchain projects take security very seriously and understand the impacts of a compromise, and as such, having a security audit has become a blockchain rite of passage. So then, if that’s the case, then why are things the way they are? We’ll get to that in a second, but let’s take a quick detour and talk about security professionals for a moment.
When experienced security professionals discover the Web3 space, they bring a lot of baggage. They look at recent attacks and assume either the project didn’t have an audit or the auditor didn’t do a good job. This perspective makes an awful lot of assumptions that other processes and procedures were in place. We’ve learned a lot about application security over the past 20 years, but those lessons learned either aren’t applied or don’t directly map to the blockchain space. So, the project may very well have had an audit, but two days after the audit was completed, they pushed vulnerable code to their project. One-shot audits can’t solve that problem.
I also get the feeling from talking with security professionals that they know that blockchain ecosystems are different, but they think they have more in common than they do. So, they may understand that Ethereum, Solana, Algorand, etc., are different, but with minimal tweaking, your expertise on one will apply to the other. This isn’t true, and there’s quite a bit of hidden complexity, especially if you are developing projects on multiple chains or cross-chain projects. Different chains have different value propositions and ways of implementing that value, and it’s easy to make simple mistakes with catastrophic consequences.
Notice I used the term “projects” instead of “companies.” This is very purposeful. Blockchains have unique communities and projects. There’s a culture, much like security communities. They have their own language and views of the world. This can be a challenge for traditional security companies. I mean, try explaining to your accounting department that someone named HODLKing40 would like to pay for an audit.
Many of these projects may have an organization behind them for initial development and launching, but the projects are meant to be owned by the community. It may also be the case that these organizations are just three people. This is an entirely different perspective than what we are used to in the enterprise security space, but it’s essential to keep in mind as you work with the community.
If I summed up the current state of blockchain security, it would be projects operating with low security maturity. Their view of security is performing a single security audit before launch. Given that these projects are being developed in full public view and used as though they were finished products, this lack of maturity is on full public display.
There was also the early perspective of, “since it uses cryptography, it must be secure.” This view fueled some of the early lack of focus on security.
Many projects are created during hackathons or as people’s side hustles. Some blockchain developers are new to development altogether and working on their very first project. It’s part of what’s exciting about the space, but these aren’t conditions ripe for security success. As a developer working for a traditional company, there are typically guardrails in place, and (hopefully) you’d be exposed to some structure, standards, and ongoing audit activities. With no previous experience, developers are left to fail in full public view.
It gets more complicated because Web3 developers need to get both blockchain and traditional security right to succeed. This is because there are traditional applications mixed in as well. Think about a web front-end for an NFT marketplace or a wallet implemented as a browser extension.
Developers may also be writing complex financial products that are quite unlike anything they’ve developed before. There are many ways to mess things up and only one way to do it right. This environment creates an instant high-value target for attackers. Then again, you can also mess things up without an attacker in the loop as well. In the blockchain space, both can have similar outcomes.
We tend to forget that we are seeing technology experiments playing out in public. We think of them as finished products because the user base is high, and there is so much money at stake. This is similar to traditional startups that operate in stealth mode, blitzscaling features into their product. Traditional startups can also exercise a low level of security maturity, but because they are developed in private, with controlled releases, their lack of security maturity isn’t on full display. It also buys them time to fix issues when identified before they are disclosed publicly.
The impacts of hacks in the blockchain space are also higher than many traditional applications. Traditional applications typically have a breadth of features and functionality. Breaches are undoubtedly bad, but most can recover, and there may be layered protections, and resolutions users can take because these traditional systems are centralized.
With blockchain systems, hacks can be irreversible. Blockchain applications and smart contracts are typically very focused on specific functionality, so a violation of that functionality means a complete compromise. Exploiting once basically exploits everywhere without needing to actually go everywhere.
The experimentation in the space isn’t constrained to the technology. Blockchain ecosystems are also experimenting with new ways to create and run organizations, leaving logistics and critical decisions up to their communities. In some cases, this means even exercising radical transparency. You may find that one of your statements of work ends up on Reddit with the user community voting on whether to go with your company or not.
Transparency is one of the great things about the blockchain space, but you can’t have both radical transparency and security. Sorry. This could only work in a world where nobody acts maliciously—for example, having all of your development and bug reporting open to the world regardless of severity. If someone points out a high severity bug directly on your public GitHub repo, it’s possible an attacker could exploit the issue before you’ve even written a fix. Given the stakes, this is a bad proposition.
In a nutshell, we need greater maturity in the space, both from blockchain and security professionals.
Security professionals can’t pretend blockchains are irrelevant. I know fights with the NFT community are fun, but we’ll have to put that aside. Part of why we are where we are is because the security community has been relatively disengaged. Let’s not continue to be the “There is no cloud, just someone else’s computer” people. That mindset didn’t work out so well for us in the past.
I also get the feeling from some that they have the perspective that if they don’t participate in security conversations on the topic, they are somehow accelerating the demise of the technology. This isn’t the case either.
There are some common themes when an emerging technology comes along. Developers of the new technology don’t implement security lessons from other disciplines, but security professionals want to implement everything we’ve learned. We need to realize that we can’t re-use the exact same approaches we’ve used with traditional enterprises. I mean, there’s no risk mitigation to losing all of your money, and scanning tools won’t solve the most significant challenges.
Treat your initial plunge as an exploratory journey. Look at different security issues that have manifested themselves in the past, be they with smart contracts or core blockchains. These projects are mostly open, so you can look at their GitHub issues and patches. Review vulnerability write-ups and deconstructions of previous attacks. Projects affected by a compromise will typically post detailed write-ups. It’s a start.
Blockchain developers need to understand that what they are building is laced with landmines, and every line of code is a potential hazard. As of today, it’s impossible to write bug-free software. This thought should be on every developer’s mind from the very start. Blockchain developers need to take a greater security responsibility and not just hope that any security issues are caught during a final audit. An audit should absolutely be part of the security process, but not the only part.
An important consideration is that different ecosystem layers have different threats and concerns. For example, a core blockchain has different security considerations than a developer writing an application to run on top of a chain. A centralized exchange has different concerns than a group participating in a DAO. No quick blog post is going to solve all of these issues. Specifics will have to be outlined by the communities themselves, given the differences between ecosystems, but since this is a conversation starter, here are some of my thoughts.
Security is a process, not a step, and needs to be considered from the start. One obvious place to start is with the security evaluation of the architecture of a system. An architecture that doesn’t consider security is hard to apply security measures to after the fact. Blockchain ecosystems can be complex, and it’s difficult, if not impossible, to update later.
Developers also need to evaluate threats during their development process. Call it threat modeling, threat assessment, or whatever, having developers think about what could go wrong is necessary for making sure things don’t go wrong. Developers should look at the highest impact areas in their code, such as ownership checks, transfers, minting, etc.
Threat modeling could start simply by using the core questions of the Threat Modeling Manifesto while performing development tasks.
Tools will help, but they won’t solve all of the issues. This is one point traditional and Web3 applications have in common.
The bottom line is that you’ll need security expertise to get this off the ground. If you don’t have that expertise available, you can engage a partner or consider hiring someone to focus on these issues.
Security isn’t something you finish. The entire design and development process should consider questions about risk and security. Make security an ongoing conversation. Conduct recurring reviews, whether through a trusted partner, pair programming, a community representative, etc.
Code additions, be they through dedicated developers or community contributions, should be evaluated with security scrutiny, focusing on high-value functions and keeping your threat model in mind.
And, of course, continue threat modeling. This should never stop.
Projects and chains should publish clear security guidance for developers on their platforms. This guidance should outline things that are considered unsafe and warn developers of potential landmines. This guidance should be followed up with other awareness activities such as webcasts, workshops, etc. Security guidance should be updated as new attack vectors are discovered. This won’t stop developers from creating vulnerabilities but may reduce the obviously dangerous mistakes.
A clear process for reporting potential vulnerabilities should be published. Details of issues, especially for critical vulnerabilities, should not be public. Code fixes should also not be made public until they’ve been applied to the running code. The goal here is to reduce the window for exploitation to a size where, once an attacker finds out, they won’t have time to exploit.
A bug bounty program can also be part of this process to entice people to disclose bugs responsibly. Offering rewards upfront is better than begging attackers to give back what they stole.
I hope this post starts some conversations and explains a bit about how we got where we are. The recommendations made here are only a simple start. There is much more work to be done.
The Web3 space is a challenging place to apply security, something that should get security professionals excited. If we do this right, there may be lessons we can apply back to traditional application security as well.
An anonymous attacker exploited a verification flaw in the Wormhole program, and 80,000 wETH were pulled out of the Wormhole contract. The problem was the use of the load_instruction_at function in the verify_signatures function of the Wormhole program. After spoofing the signature verification of a malicious message, the attacker was able to transfer tokens from Solana, identical to legitimate tokens, through the Wormhole bridge to Ethereum.
Wormhole Bridge is a bridge between blockchains: it allows transferring assets from one blockchain to another. More precisely, it is both a token bridge and an NFT bridge. Tokens are created on each chain; for example, on Ethereum they are ERC20 tokens and on Solana they are SPL tokens. In addition, a smart contract (or program, on Solana) manages each token on each chain. On Solana, the Wormhole program is deployed here. The BPF bytecode is available, and the source code, written in Rust, is open-source.
On top of that, Guardians manage transactions between the blockchains. Before a token is transferred to another chain, they check that minted tokens were correctly generated by verifying their signatures on the secp256k1 curve.
In Solana, the instruction_sysvar account contains all instructions of the message of the transaction that is being processed. This allows program instructions to reference other instructions in the same transaction (https://docs.solana.com/developing/runtime-facilities/sysvars#instructions).
In Wormhole, the verify_signatures function is called first to build the signed signature_set that is later consumed by the post_vaa function. Its accounts are defined as follows:
pub struct VerifySignatures<'b> {
    /// Payer for account creation
    pub payer: Mut<Signer<Info<'b>>>,
    /// Guardian set of the signatures
    pub guardian_set: GuardianSet<'b, { AccountState::Initialized }>,
    /// Signature Account
    pub signature_set: Mut<Signer<SignatureSet<'b, { AccountState::MaybeInitialized }>>>,
    /// Instruction reflection account (special sysvar)
    pub instruction_acc: Info<'b>,
}
However, the verify_signatures function used the load_instruction_at function, which parses an instruction from its input data (here, the data of the instruction sysvar account). This function does not check that the supplied account is the real sysvar account. In other words, the instruction sysvar account was never verified.
let secp_ix = solana_program::sysvar::instructions::load_instruction_at(
    secp_ix_index as usize,
    &accs.instruction_acc.try_borrow_mut_data()?,
)
The attacker therefore created a fake instruction sysvar account containing forged data (https://solscan.io/account/2tHS1cXX2h1KBEaadprqELJ6sV9wLoaSdX68FqsrrZRd) and spoofed the signature verification of previously valid token transfers (https://solscan.io/tx/5fKWY7XyW6PTzjviTDvCTpsqgfoGAAqUs1mC6w4DZm25Ppw7fX7aWDmrnkknewyZ81qMSix3c18ZuvjoZUF34tpa). As a result, all signatures in the signature_set were marked as true, making the set look fully valid.
for s in sig_infos {
    if s.signer_index > accs.guardian_set.num_guardians() {
        return Err(ProgramError::InvalidArgument.into());
    }

    if s.sig_index + 1 > sig_len {
        return Err(ProgramError::InvalidArgument.into());
    }

    let key = accs.guardian_set.keys[s.signer_index as usize];
    // Check key in ix
    if key != secp_ixs[s.sig_index as usize].address {
        return Err(ProgramError::InvalidArgument.into());
    }

    // Overwritten content should be zeros except double signs by the signer or harmless replays
    accs.signature_set.signatures[s.signer_index as usize] = true;
}
Once a signature_set is created, the post_vaa function checks whether it contains enough signatures to reach consensus and post a Validator Action Approval (VAA). At that point, the attacker holds a valid VAA and can trigger an unauthorized mint to his own account.
let signature_count: usize = accs.signature_set.signatures.iter().filter(|v| **v).count();

// Calculate how many signatures are required to reach consensus. This calculation is in
// expanded form to ease auditing.
let required_consensus_count = {
    let len = accs.guardian_set.keys.len();
    // Fixed point number transformation with one decimal to deal with rounding.
    let len = (len * 10) / 3;
    // Multiplication by two to get a 2/3 quorum.
    let len = len * 2;
    // Division to bring number back into range.
    len / 10 + 1
};

if signature_count < required_consensus_count {
    return Err(PostVAAConsensusFailed.into());
}
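The quorum arithmetic above can be re-derived standalone. Below is a minimal sketch in plain Rust with no Solana dependencies; the guardian-set size of 19 reflects Wormhole's configuration at the time of the attack:

```rust
// Standalone re-derivation of the fixed-point quorum computation shown above.
fn required_consensus(num_guardians: usize) -> usize {
    // Fixed-point number with one decimal to deal with rounding.
    let len = (num_guardians * 10) / 3;
    // Multiplication by two to get a 2/3 quorum.
    let len = len * 2;
    // Division to bring the number back into range.
    len / 10 + 1
}

fn main() {
    // With a guardian set of 19, 13 signatures marked "true" reach consensus,
    // which is exactly what the forged signature_set provided.
    assert_eq!(required_consensus(19), 13);
    assert_eq!(required_consensus(1), 1);
    println!("quorum for 19 guardians: {}", required_consensus(19));
}
```

With 19 guardians, 13 spoofed signatures were therefore enough to pass the consensus check.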
We want to emphasize how important it is to verify the validity of unmodified, reference-only accounts in Solana (https://docs.solana.com/developing/programming-model/accounts#verifying-validity-of-unmodified-reference-only-accounts). This is because a malicious user can create accounts with arbitrary data and pass them to a program in place of valid accounts. This attack is an example.
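To illustrate such a validity check, here is a toy sketch of the missing guard. The types and function names below are ours, not the actual Wormhole patch; only the sysvar address is real:

```rust
// Address of the genuine Instructions sysvar account on Solana.
const INSTRUCTIONS_SYSVAR_ID: &str = "Sysvar1nstructions1111111111111111111111111";

// Simplified stand-in for an account passed to a program (illustrative only).
struct AccountInfo {
    key: String,
    data: Vec<u8>,
}

// Reject any account whose address is not the real Instructions sysvar
// before trusting the instruction data it carries.
fn load_instruction_data_checked(acc: &AccountInfo) -> Result<&[u8], &'static str> {
    if acc.key != INSTRUCTIONS_SYSVAR_ID {
        // An attacker-created account with forged instruction data ends here.
        return Err("account is not the instructions sysvar");
    }
    Ok(&acc.data)
}

fn main() {
    let fake = AccountInfo { key: "attacker-created-account".into(), data: vec![0xde, 0xad] };
    assert!(load_instruction_data_checked(&fake).is_err());

    let real = AccountInfo { key: INSTRUCTIONS_SYSVAR_ID.into(), data: vec![1, 2, 3] };
    assert_eq!(load_instruction_data_checked(&real).unwrap().to_vec(), vec![1, 2, 3]);
}
```

Newer versions of solana_program also expose a checked variant, load_instruction_at_checked, which performs this account verification internally.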
The attack on Wormhole is the second-largest reported hack after Poly Network (https://research.kudelskisecurity.com/2021/08/12/the-poly-network-hack-explained/). The attacker was able to steal crypto-assets worth $324 million because of just a missing check. This is again a costly lesson for all blockchain developers, especially for Solana program developers.
Our analysis aimed to summarize and give some context to the earlier analyses reported during the first hours of the hack.
Post written by: Tuyet Duong and Sylvain Pelissier
Marinade is the “easiest way to stake Solana” and is a liquid staking protocol built on Solana where people can stake, use automated staking strategies, and receive tokens they can use to work within DeFi systems or swap back and unstake. The programs are written primarily in Rust.
For this blog, we will discuss the work executed during our security assessment for the Marinade team in 2021.
For a more in-depth overview of Marinade and its roadmap, please see Marinade’s documentation page here.
To begin, Marinade walked us through their repository, as well as their design documents and Medium blog.
Our assessment focused on code committed as of October 15, 2021 and focused on the following objectives:
There is a focused methodology that we follow in reviewing solutions such as Marinade. Not only do we review a threat assessment of possible exploits of the system, but we conduct a review of the code, appropriate usage of the SPL, fund loss scenarios, and program authentication scenarios and components. In all situations, the Marinade solution met our requirements for an effectively implemented product, including resolving any findings we uncovered.
In the security report, we identified one MEDIUM, one LOW, and one INFORMATIONAL finding.
After finalizing the assessment, we verified these initial weaknesses in the codebase but did not find any critical fund-loss or staking issues, and the team quickly resolved all findings to our satisfaction prior to deployment.
It was a pleasure working with the Marinade team, and we look forward to working with them again in the future.
The full Kudelski Security report is located here: https://marinade.finance/KudelskiSecurity.pdf
Authors: Antonio de la Piedra (Kudelski Security Research Team) and Marloes Venema (Radboud University Nijmegen)
This week at Black Hat Europe 2021 we have presented our work on attacking attribute-based encryption implementations: https://www.blackhat.com/eu-21/briefings/schedule/#practical-attacks-against-attribute-based-encryption-25058.
Attribute-based encryption (ABE) provides fine-grained access control on data where the ability to decrypt a ciphertext is determined by the attributes owned by a user of the system. Hence, data can be stored by an entity that is not necessarily trusted to enforce access control.
ABE has been proposed to secure the Internet of Things and enforce authorization in Cloud systems. This is typically exemplified in the healthcare setting, where all “nurses” of the hospital “A” can only decrypt certain records whereas “doctors” of the same hospital have access to additional information about the patients.
In this type of deployment, the following parties are involved:
Typically, ABE schemes are based on pairings (although some new schemes based on lattice assumptions have appeared in the last few years), since it is generally known that secure schemes based only on ECC assumptions (such as DDH) do not exist.
For instance, in the example below, Bob has the following attributes: “doctor”, “Mayo Clinic” and “neurology”. In this particular case, another user in the system, Alice, can encrypt a message for Bob using the following policy: “(doctor or nurse) and Mayo Clinic and neurology”. Bob can then decrypt this message since, using his attributes (doctor, “Mayo Clinic” and “neurology”), he satisfies the policy used by Alice.
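The access structure in this example can be modeled as a simple boolean check. This is illustrative only: in real ABE the policy is enforced cryptographically during decryption, not by an access-control test.

```python
def satisfies_example_policy(attrs):
    """Models the policy: (doctor or nurse) and Mayo Clinic and neurology."""
    return (("doctor" in attrs or "nurse" in attrs)
            and "Mayo Clinic" in attrs
            and "neurology" in attrs)

# Bob's attribute set satisfies the policy, so he can decrypt.
bob = {"doctor", "Mayo Clinic", "neurology"}
assert satisfies_example_policy(bob) is True

# A nurse from another clinic lacks "Mayo Clinic" and cannot decrypt.
assert satisfies_example_policy({"nurse", "City Hospital", "neurology"}) is False
```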

Moreover, multi-authority variants of ABE exist and extend these capabilities to multiple-domain settings thus removing the requirement of having a trusted third party.
For instance, in this case both Bob and Charlie can receive attributes from two attribute authorities, the Hospital and the Insurance company authorities.

ABE can be utilized as an authorization mechanism in the Cloud as different works have proposed. In this case, data owners e.g. Alice publish:
Below, we show how ABE can be used in the Cloud, depicting the general architecture of DAC-MACS [1], a highly-cited scheme:

In this case, there are two KGAs in the system: the Insurance company KGA and the Hospital KGA. Alice is the data owner who wants to share sensitive data with the user Charlie. First, Alice generates a symmetric encryption key that she uses to encrypt a message. The message is encrypted using the following policy: ‘(doctor or nurse) and Mayo Clinic and neurology’. Using the token generation mechanism of DAC-MACS [1], Charlie can obtain the ciphertext created by Alice and recover the content key that opens the sensitive data shared by Alice.
On the other hand, other practitioners have proposed securing Internet of Things deployments with ABE. Most of these works relate to the Smart City paradigm: different types of sensing data are gathered from various sources across the city, such as transportation providers and energy infrastructure, with the goal of optimization. ABE can then be used to enforce authorization on the collected data for the different data owners performing analysis. One ABE scheme provided by several open-source libraries and focused on IoT deployments is YCT14 [2].
Several practitioners have proposed techniques and heuristics to analyze the security of ABE schemes. This year, at the CT-RSA 2021 conference [3], Venema and Alpár presented attacks against 11 ABE and MA-ABE schemes, including DAC-MACS [1] and the YJ14 scheme [4]. Furthermore, in 2019, Herranz [5] showed that several schemes based only on elliptic curves, such as YCT14 [2], were broken.
In our talk, we demonstrated the practicality of these attacks. We implemented three different types of attacks:
Open-source libraries such as CHARM [6] and RABE [7] provide, among others, implementations of these schemes. We have implemented the attacks in the CHARM cryptographic library and show that the implementations of DAC-MACS [1], YJ14 [4] and YCT14 [2] schemes provided by this particular library are vulnerable to decryption attacks.
Based on the status of the schemes, we have obtained the following CVEs:
Together with our presentation, we provide a Python library, abeattacks (available at https://pypi.org/project/abeattacks/), implementing some of the cryptanalytic attacks of Venema and Alpár [3] against the aforementioned ABE schemes.
Further, we have prepared 3 Jupyter notebooks where ABE and the practical attacks against the ABE schemes are illustrated (available at https://github.com/kudelskisecurity/abeattacks/jupyter/). These notebooks can be used to learn more about the attacks in practice.
We have released a Dockerfile with everything ready at https://github.com/kudelskisecurity/abeattacks/tree/main/docker. You can follow the instructions below to see how the attacks work in practice:
$ git clone https://github.com/kudelskisecurity/abeattacks/
$ cd abeattacks/docker
$ ./build_and_run.sh
Then, open your browser at the suggested location by jupyter:

You can follow the decryption attack against DAC-MACS [1] for instance:

Finally, we have published the slides of our presentation at https://github.com/kudelskisecurity/abeattacks/tree/main/slides/.
(We use URLs to full papers in PDF if they are available).
[1] http://www.acsu.buffalo.edu/~kuiren/DACMACS.pdf
[2] https://daneshyari.com/article/preview/424591.pdf
[3] https://eprint.iacr.org/2020/460.pdf
[4] https://www.computer.org/csdl/journal/td/2014/07/06620875/13rRUIJuxpd
[5] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9291064
[6] https://github.com/JHUISI/charm
[7] https://github.com/Fraunhofer-AISEC/rabe
Many static analysis tools exist out there for detecting security issues. These tools are a necessary part of the development lifecycle. Detecting issues is great but it’s just the first step in the process. Someone still has to remediate those issues. What if we could automatically fix them?
Semgrep is a great static analysis tool. It has a lesser-known but really neat feature in development called Autofix. This feature not only lets you detect security issues, but also automatically fix them, as long as the rule that matched is autofix-capable. Let’s see how this can be achieved with a couple examples.
Let’s assume we have the following source code in the file buffer-overflow.c:
#include <stdio.h>
#include <bsd/string.h>

int do_stuff(int len, char *b) {
    if (len % 8) {
        printf("mod 8\n");
    }
    char a[10];
    printf("working...\n");
    strcpy(a, b);
    printf("%s", a);
    return 42;
}

int main() {
    char b[20] = "abc";
    do_stuff(10, b);
}

This code may lead to a buffer overflow. One should not use strcpy but strlcpy, passing the size of the destination buffer instead.
We can write a Semgrep rule to automatically fix this issue in a file named buffer-overflow.yml, using the fix attribute in our rule:
rules:
  - id: buffer-overflow
    patterns:
      - pattern-either:
          - pattern: |
              char $A[$SIZE];
              $...REST;
              strcpy($A, $B);
    fix: |
      char $A[$SIZE];
      $...REST
      strlcpy($A, $B, $SIZE);
    message: "Use of strcpy is insecure and may lead to buffer overflow. Use strlcpy instead."
    languages: [ c ]
    severity: ERROR

Notice the use of the $...REST syntax to match every instruction between char $A[$SIZE]; and strcpy($A, $B); so that we can put it back into the replacement code.
Now we can run semgrep with the --autofix or -a flag:
$ semgrep --config buffer-overflow.yml --autofix buffer-overflow.c

Our source code file has successfully been fixed:
#include <stdio.h>
#include <bsd/string.h>

int do_stuff(int len, char *b) {
    if (len % 8) {
        printf("mod 8\n");
    }
    char a[10];
    printf("working...\n");
    strlcpy(a, b, 10);
    printf("%s", a);
    return 42;
}

int main() {
    char b[20] = "abc";
    do_stuff(10, b);
}

Semgrep also supports regex replacement within a match. Suppose we have the following source code in a file named sid.rs:
fn main() {
    let env = "production";
    println!("env = {}", env);
    let sid = "1336-something";
    if env == "production" {
        // Note: sid has format "level-name"
        // and level is a four digit number which should never end with 6 in production!
        // Use sid levels ending with 7 instead
        let production_sid = "1336-foobar";
        println!("production sid = {}", production_sid);
    } else {
        println!("sid = {}", sid);
    }
}

Imagine that sid values should always end with a 7 when used in production. Let’s write a Semgrep rule in the file sid.yml that automatically fixes this, but only when the value is used in production:
rules:
  - id: sid
    patterns:
      - pattern-either:
          - pattern: |
              if env == "production" {
                  ...
                  let $FOO = "$Y";
                  ...
              }
    fix-regex:
      regex: '(?P<start>[0-9]{3})(?P<last>[0-9]{1})-(?P<description>.*)'
      replacement: '\g<start>7-\g<description>'
    message: "Sid level should always end with a 7 in production."
    languages: [ rust ]
    severity: ERROR

Note that we use the (?P<GROUP_NAME>REGEX_PATTERN) regex syntax here so that named capture groups can be referenced by name with the \g<GROUP_NAME> syntax in the replacement text.
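The same named-group replacement can be tried standalone, since Python’s re module uses the identical (?P<name>...) and \g<name> syntax:

```python
import re

# Same pattern and replacement as the fix-regex rule above.
pattern = r'(?P<start>[0-9]{3})(?P<last>[0-9]{1})-(?P<description>.*)'
replacement = r'\g<start>7-\g<description>'

# The last digit of the level is rewritten to 7, the description is kept.
fixed = re.sub(pattern, replacement, "1336-foobar")
assert fixed == "1337-foobar"
print(fixed)  # 1337-foobar
```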
Now let’s run our rule on our file:
$ semgrep --config sid.yml sid.rs -a

Our code is now fixed in the right place only (the production_sid variable):
fn main() {
    let env = "production";
    println!("env = {}", env);
    let sid = "1336-something";
    if env == "production" {
        // Note: sid has format "level-name"
        // and level is a four digit number which should never end with 6 in production!
        // Use sid levels ending with 7 instead
        let production_sid = "1337-foobar";
        println!("production sid = {}", production_sid);
    } else {
        println!("sid = {}", sid);
    }
}

Semgrep’s autofix feature can go the extra mile and prevent developers from introducing security issues into a production codebase by automatically fixing them.
A possible first step would be to instruct all developers to use pre-commit and install a pre-commit hook that runs autofix Semgrep rules automatically before any commit is made. For example, one can document this in the project’s README. This, however, does not prevent anyone from skipping pre-commit.
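As a sketch, such a hook could be declared in a .pre-commit-config.yaml like the following. The revision and rule path are placeholders to adapt to your project; the Semgrep repository ships a pre-commit hook with id semgrep:

```yaml
# Sketch of a .pre-commit-config.yaml running Semgrep with autofix on each commit.
repos:
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.0.0          # placeholder: pin an actual Semgrep release
    hooks:
      - id: semgrep
        args: ["--config", "rules/", "--autofix", "--error"]
```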
One can be even stricter and set up a CI pipeline that runs the pre-commit hooks whenever a pull request is made. If the hooks change the code, it means someone pushed a commit without running them; in that case, the pipeline can be made to fail. Of course, pull requests would only be merged if the pipeline completes successfully, and direct writes to the main branch would be disabled.
The feature is still in its very early stages, but providing capabilities in scanning tools beyond detecting and reporting could have a notable impact on code security and development speed. We have seen that automatically fixing security issues is possible today. We hope these examples will be helpful to others too. Keep shrinking that attack surface.