
This advisory was written by Travis Holland and Eric Dodge of the Kudelski Security Threat Detection & Research Team
Incontroller/Pipedream is a collection of sophisticated tools thought to be created by a group dubbed "Chernovite" by Dragos. Chernovite is assessed to be a state-sponsored adversary that developed the toolkit for use in future operations. The toolkit is primarily aimed at the electric and natural gas verticals; however, it is not limited to solely those. At this time, the CFC has no intelligence that Pipedream has been successfully deployed in the wild, which has given researchers time to evaluate the tools proactively. The suite of utilities is designed to allow access to and manipulation of Schneider Electric and Omron PLCs, as well as Open Platform Communications Unified Architecture (OPC UA) servers. Dragos, an ICS-focused cyber security company, has broken Incontroller/Pipedream into five components: Evilscholar, Badomen, Mousehole, Dusttunnel, and Lazycargo.
When properly used, these tools allow an attacker to scan for devices, brute-force passwords, close connections, and even crash the targeted device. PLC implants are used to execute untrusted code on the PLCs; these implants could remain on an impacted PLC for long durations, requiring firmware forensic analysis to reveal their presence.
The CFC has worked with its ICS-aware Network Intrusion Detection System (IDS) partner, Claroty, which has written and published detection signatures for Pipedream. All clients of the CFC's MDR for OT service have had these signatures deployed to their Claroty deployments.
This impacts the following systems, typically located in electrical substations and communicating via the IEC-104 protocol:
Incontroller/Pipedream is a sophisticated and modular set of tools that an attacker can leverage once they have established access within an environment. The foothold is established by whatever vector is available to the attacker, followed by use of the ASRock driver exploit (CVE-2020-15368) to further escalate privileges and move through the environment. The ASRock exploit is rather trivial: it only requires administrative access to escalate further and execute arbitrary code with kernel privileges.
The tool's modular architecture and automation make it easy to add components as needed; the ASRock exploit, for example, could easily be swapped for another exploit or tool. Depending on the PLC type, there are different actions and objectives that the threat actor would look to achieve.
Schneider Electric Devices:
Omron devices:
OPC UA:
Currently Known Indicators of Compromise (IOCs)
There is currently no evidence of Incontroller/Pipedream being deployed for disruptive or destructive effects. It is known to utilize standard ICS protocols and actions to live off the land natively. Proper monitoring for any suspicious use of the ASRock driver can help mitigate a portion of the toolset seen within Incontroller/Pipedream. It is important to note that the ASRock driver exploit requires the attacker to already have administrator-level privileges on the host; however, future exploits may have different requirements.
The Cyber Fusion Center recommends the following for mitigation, discovery, and recovery:
Additionally, dedicated ICS monitoring can aid in quickly identifying deviations from the baseline that could be indicative of movement and attacks within the ICS infrastructure. We recommend examining non-baseline activity and restricting access to the following destination ports:
While there are currently no known active deployments of this tooling, the Cyber Fusion Center's OT Intrusion Detection System (IDS) partner, Claroty, has developed and published network signatures designed to detect the potential presence of this tooling. All clients of the CFC's MDR for OT service have had these new detection signatures deployed on their behalf.
Okta is one of the premier identity providers in the world and is trusted by thousands of customers. The Lapsus$ threat actor group, which has been very active lately targeting Microsoft and Nvidia, has allegedly breached Okta customer environments. The group published screenshots of environments it was able to access. The threat actor claims to have acquired full admin access to Okta.com, and also claims that "our focus was ONLY on Okta customers".
While Okta has confirmed that an attempt to breach Okta in late January 2022 was investigated and contained at the time, Okta has now acknowledged that, after thorough investigation, approximately 2.5% of its customers have been impacted thus far.
Only customers of the "core" Okta product are possibly impacted; there is no impact to Auth0 customers, nor to customers leveraging Okta's HIPAA- and FedRAMP-certified platforms. Okta said that the impacted customers have already been contacted by email.
Finally, Okta's investigation showed that during a five-day window (January 16-21, 2022) the threat actor had access to a third-party (contractor) support engineer's laptop. The impact is limited to the access that support engineer had. Support engineers have access to limited data, such as Jira tickets or lists of users, and can reset passwords and multi-factor authentication (MFA) factors. Okta confirmed that support engineers are unable to create or delete users, obtain those passwords, or download customer databases.
If your organization is using Okta and has been notified by Okta that you are impacted, the CFC strongly recommends contacting your incident response partner to help understand the potential extent of the attack campaign.
We also recommend quickly suspending accounts that may have had their credentials or MFA devices reset by the threat actors until you have validated that such access has not been abused.
Even if Okta has not identified you as an impacted customer, the CFC strongly recommends that all Okta customers take the following actions:
The CFC leverages Auth0 as a multi-factor authentication and authorization provider. Due to these events, the CFC is working closely with Auth0 to ensure our internal users are not impacted. The Kudelski Security DevOps and Security Engineering teams have worked with Okta to confirm that, at this time, the Auth0 platform is not known to be impacted by these events.
Although Okta has not identified any suspicious activity with regards to the Auth0 platform, Kudelski Security has independently verified that no suspicious activity was identified with regards to user MFA devices.
Additionally, it's important to note that the CFC does not leverage Auth0 to store internal user credentials. Auth0 is used to provide multi-factor authentication and authorization for access to internal CFC systems and infrastructure. This dual-vendor strategy ensures that no single vendor is a single point of failure. Successful compromise of the CFC's environment would require a threat actor to compromise both the CFC's identity and credential provider (Azure Active Directory) and Auth0 in order to gain access to internal CFC systems, or to activate a "single vendor" break-the-glass scenario that would notify the Kudelski Security DevOps team. No such activity has been identified.
The CFC will continue to monitor the situation and will provide updates to clients as more information is available. At this time, there is no indication that the CFC’s Auth0 deployment has been affected and no indication that a threat actor has been able to reset MFA devices.
As the current situation continues to evolve, the Kudelski Security Cyber Fusion Center is continuously adapting our response to events, intelligence, and new details being released. For details on how the CFC is responding to newly released information, please review the following updates.
On March 3rd, the United States Cybersecurity and Infrastructure Security Agency (CISA) updated its catalog of known commonly exploited vulnerabilities, adding 95 new entries after increased analysis of suspected Russian intrusions. The bulk of these newly added vulnerabilities appear to have been actively exploited by Russian threat actors and, as such, should be prioritized for remediation. In response to this new set of known exploited vulnerabilities, the CFC reviewed vulnerabilities found for clients of Kudelski Security's Vulnerability Scanning Service and proactively updated all impacted clients with the list of known exploited vulnerabilities on their internet-exposed systems.
2. Fine-Tuning of Volume Shadow Copy (VSC) Auditing for MDR for Endpoint Clients with CrowdStrike Falcon
For clients of the CFC's MDR for Endpoint service, the CFC continues to fine-tune the extra visibility enabled to identify tampering with Windows Volume Shadow Copy (VSC) "backups". The CFC has analyzed and reviewed all alerts generated and is working with clients to gather additional input regarding the legitimacy of the activity observed. The CFC will await clients' feedback in order to fine-tune configurations prior to enabling the VSC deletion prevention features, in order to minimize disruption of any legitimate activity.
3. Analysis and Vigilance of New WMI and SMB Worm Used to Deploy HermeticWiper in Ukraine
The CFC has continued to monitor information and research about the malicious software deployed against Ukraine. As part of this monitoring, the Kudelski Security Detection Engineering team analyzed the worm component named "HermeticWizard" to ensure the CFC's security analysis team remained informed about how destructive attacks against Ukraine were carried out. As an example of this analysis, our team created the following diagram describing the logic and potential indicators of compromise of this new worm component:

4. Validation of Newly Deployed Claroty Signatures for MDR for OT Clients
For our MDR for OT clients, on February 27th Claroty released a new threat bundle that included new and updated detections for HermeticWiper and additional detections for the newly discovered malware dubbed "SockDetour". SockDetour is a highly stealthy malware used as a secondary implant on compromised Windows servers since at least July 2019. As we had already ensured that all our Claroty Continuous Threat Detection (CTD) deployments are configured to receive automatic signature updates, all MDR for OT clients have already benefited from these extra detection capabilities.
5. Continuous Vigilance and Advisory Development
In addition to the previous measures, the CFC released an advisory on Cyclops Blink, a new malware that appears to be a replacement for the previously discovered and documented VPNFilter malware. While Cyclops Blink is so far known to only target SOHO devices from WatchGuard, an assessment of the malware reveals that it could also be compiled for and deployed onto other architectures and SOHO networking equipment. This leads the CFC to continuously monitor this threat and its evolution in order to identify potentially infected systems and provide clients with mitigation and remediation steps as soon as possible.
As communicated previously, the Kudelski Security Cyber Fusion Center is aware of and actively monitoring the current global tensions resulting from the events surrounding Russia and Ukraine. The United States Cybersecurity and Infrastructure Security Agency (CISA) has published an advisory regarding potential Russian attempts to utilize cyber-attacks for force projection and as a response to western sanctions.
There are currently no specific threats targeting the United States, other NATO members or partner countries. However, Russian interests have recently expressed discontent with ongoing sanctions and have shown willingness to target “sensitive” assets. Additionally, the CFC is aware of several cyber-criminal groups (such as the Conti ransomware group) who have pledged to attack critical infrastructure of “Russian enemies” in the event that a cyber-attack is launched against Russia. In light of these threats and the ongoing situation with Ukraine, the Cyber Fusion Center is operating with increased vigilance and is actively monitoring for potential cyber-attack related activity as part of these increased tensions. This increased vigilance will continue until tensions ease.
Additionally, the CFC is aware of data wipers (dubbed "HermeticWiper") that have been discovered and potentially deployed in critical infrastructure within Ukraine. These wipers have also been discovered on systems of Ukrainian government contractors based in Latvia and Lithuania.
The CFC strongly recommends all clients and organizations investigate systems that may be vulnerable to CISA's "Known Exploited Vulnerabilities" listed here:
https://www.cisa.gov/known-exploited-vulnerabilities-catalog
The CFC will continue to monitor the situation and provide our CFC analyst team and clients any additional technical and cyber security related insights.
1. Identified Known Exploited Vulnerabilities discovered on vulnerability scanning client perimeters
For clients using Kudelski Security's Vulnerability Scanning Service, the CFC has proactively reviewed vulnerability scan results for internet-exposed systems for vulnerabilities that are known to be actively exploited, according to CISA.
The CFC has prioritized identifying vulnerabilities known to be used by Russian threat actors. For clients who have known exploited vulnerabilities on their internet perimeter, the CFC has opened cases to communicate which assets may be vulnerable and should be remediated as soon as possible.
The Cyber Fusion Center strongly suggests that clients who use the Kudelski Security Vulnerability Scanning service validate their vulnerability scanning scope to ensure all internet-facing assets are being properly scanned.
2. Enabling Additional visibility into wiper and ransomware technical precursors for MDR for Endpoint clients
Based on guidance from our Detection Engineering and Incident Response organizations, the CFC is working to enable additional CrowdStrike visibility (Volume Shadow Copy – Audit) for technical precursors of ransomware across the client base. As this additional audit visibility may generate false positive CrowdStrike detections, the CFC will be investigating all volume shadow copy related activity, escalating activity believed to be suspicious, and tuning as appropriate.
The CFC will monitor for the effects of the auditing policy mentioned above, and for clients with CrowdStrike's Prevent module, the CFC may recommend enabling specific CrowdStrike features that prevent the deletion of Windows "backups" (volume shadow copies). The CFC will communicate with clients and get approval prior to enabling any preventative controls.
Note: No additional auditing is currently required for clients with Microsoft Defender for Endpoint.
3. Enabling automatic updates of Claroty threat detection signatures for MDR for OT clients
The CFC has worked to ensure all Claroty Continuous Threat Detection (CTD) deployments are configured to receive automatic updates to passive Claroty threat signatures. Additionally, we've worked with Claroty to confirm that the Claroty team will release additional threat signatures as the situation evolves.
4. Continuous monitoring and vigilance
The Kudelski Security Incident Response, Detection Engineering, and Cyber Fusion Center teams continue to monitor events and provide guidance to both our clients and the CFC.
Please note that the CFC is working diligently to provide the best detection and response capabilities possible during this time of heightened tension. However, some of the activities performed in order to provide better service may lead to an increased number of security events that need to be triaged and investigated on your behalf by the CFC.
This bulletin and guidance will be updated as the situation develops.
Sources
• https://www.cisa.gov/known-exploited-vulnerabilities-catalog
• https://www.cisa.gov/shields-up
• https://twitter.com/cisajen/status/1499496597234855940
Hello Web3/blockchain world, great job. You got people to take you seriously, trusting your projects and investing their money. You've sold people on your innovations, and people believe in your projects. Mission accomplished. But with great trust comes great responsibility. It's time to learn valuable lessons from other areas that have gone before you, the most valuable of which is that security isn't a task; it's a process.
With this post, I hope to add some clarity, both for blockchain projects and security professionals who may be new to the space. This is a bit of a quick mental dump and far from being comprehensive, but I hope it’s the start of a conversation between both the blockchain and security communities.
As an outsider looking at the current state of security with blockchains, it seems as though blockchain projects don't take security seriously. Nothing could be further from the truth. Blockchain projects take security very seriously and understand the impacts of a compromise; as such, having a security audit has become a blockchain rite of passage. So, if that's the case, why are things the way they are? We'll get to that in a second, but let's take a quick detour and talk about security professionals for a moment.
When experienced security professionals discover the Web3 space, they bring a lot of baggage. They look at recent attacks and assume either the project didn’t have an audit or the auditor didn’t do a good job. This perspective makes an awful lot of assumptions that other processes and procedures were in place. We’ve learned a lot about application security over the past 20 years, but those lessons learned either aren’t applied or don’t directly map to the blockchain space. So, the project may very well have had an audit, but two days after the audit was completed, they pushed vulnerable code to their project. One-shot audits can’t solve that problem.
I also get the feeling from talking with security professionals that they know blockchain ecosystems are different, but they think the ecosystems have more in common than they do. They may understand that Ethereum, Solana, Algorand, etc., are different, but assume that with minimal tweaking, expertise in one will apply to the others. This isn't true, and there's quite a bit of hidden complexity, especially if you are developing projects on multiple chains or cross-chain projects. Different chains have different value propositions and ways of implementing that value, and it's easy to make simple mistakes with catastrophic consequences.
Notice I used the term “projects” instead of “companies.” This is very purposeful. Blockchains have unique communities and projects. There’s a culture, much like security communities. They have their own language and views of the world. This can be a challenge for traditional security companies. I mean, try explaining to your accounting department that someone named HODLKing40 would like to pay for an audit.
Many of these projects may have an organization behind them for initial development and launching, but the projects are meant to be owned by the community. It may also be the case that these organizations are just three people. This is an entirely different perspective than what we are used to in the enterprise security space, but it’s essential to keep in mind as you work with the community.
If I summed up the current state of blockchain security, it would be projects operating with low security maturity. Their view of security is performing a single security audit before launch. Given that these projects are being developed in full public view and used as though they were finished products, this lack of maturity is on full public display.
There was also the early perspective of, “since it uses cryptography, it must be secure.” This view fueled some of the early lack of focus on security.
Many projects are created during hackathons or as people’s side hustles. Some blockchain developers are new to development altogether and working on their very first project. It’s part of what’s exciting about the space, but these aren’t conditions ripe for security success. As a developer working for a traditional company, there are typically guardrails in place, and (hopefully) you’d be exposed to some structure, standards, and ongoing audit activities. With no previous experience, developers are left to fail in full public view.
It gets more complicated because Web3 developers need to get both blockchain and traditional security right to succeed. This is because there are traditional applications mixed in as well. Think about a web front-end for an NFT marketplace or a wallet implemented as a browser extension.
Developers may also be writing complex financial products that are quite unlike anything they've developed before. There are many ways to mess things up and only one way to do it right. This environment creates an instant high-value target for attackers. Then again, you can also mess things up without an attacker in the loop. In the blockchain space, both can have similar outcomes.
We tend to forget that we are seeing technology experiments playing out in public. We think of them as finished products because the user base is high, and there is so much money at stake. This is similar to traditional startups that operate in stealth mode, blitzscaling features into their product. Traditional startups can also exercise a low level of security maturity, but because they are developed in private, with controlled releases, their lack of security maturity isn’t on full display. It also buys them time to fix issues when identified before they are disclosed publicly.
The impacts of hacks in the blockchain space are also higher than in many traditional applications. Traditional applications typically have a breadth of features and functionality. Breaches are undoubtedly bad, but most can recover, and because these traditional systems are centralized, there may be layered protections and resolutions users can take.
With blockchain systems, hacks can be irreversible. Blockchain applications and smart contracts are typically very focused on specific functionality, so a violation of that functionality means a complete compromise. Exploiting once basically exploits everywhere without needing to actually go everywhere.
The experimentation in the space isn’t constrained to the technology. Blockchain ecosystems are also experimenting with new ways to create and run organizations, leaving logistics and critical decisions up to their communities. In some cases, this means even exercising radical transparency. You may find that one of your statements of work ends up on Reddit with the user community voting on whether to go with your company or not.
Transparency is one of the great things about the blockchain space, but you can’t have both radical transparency and security. Sorry. This could only work in a world where nobody acts maliciously—for example, having all of your development and bug reporting open to the world regardless of severity. If someone points out a high severity bug directly on your public GitHub repo, it’s possible an attacker could exploit the issue before you’ve even written a fix. Given the stakes, this is a bad proposition.
In a nutshell, we need greater maturity in the space, both from blockchain and security professionals.
Security professionals can’t pretend blockchains are irrelevant. I know fights with the NFT community are fun, but we’ll have to put that aside. Part of why we are where we are is because the security community has been relatively disengaged. Let’s not continue to be the “There is no cloud, just someone else’s computer” people. That mindset didn’t work out so well for us in the past.
I also get the feeling from some that they have the perspective that if they don’t participate in security conversations on the topic, they are somehow accelerating the demise of the technology. This isn’t the case either.
There are some common themes when an emerging technology comes along. Developers of the new technology don’t implement security lessons from other disciplines, but security professionals want to implement everything we’ve learned. We need to realize that we can’t re-use the exact same approaches we’ve used with traditional enterprises. I mean, there’s no risk mitigation to losing all of your money, and scanning tools won’t solve the most significant challenges.
Treat your initial plunge as an exploratory journey. Look at different security issues that have manifested in the past, be they with smart contracts or core blockchains. These projects are mostly open, so you can look at their GitHub issues and patches. Review vulnerability write-ups and deconstructions of previous attacks. Projects affected by a compromise will typically post detailed write-ups. It's a start.
Blockchain developers need to understand that what they are building is laced with landmines, and every line of code is a potential hazard. As of today, it’s impossible to write bug-free software. This thought should be on every developer’s mind from the very start. Blockchain developers need to take a greater security responsibility and not just hope that any security issues are caught during a final audit. An audit should absolutely be part of the security process, but not the only part.
An important consideration is that different ecosystem layers have different threats and concerns. For example, a core blockchain has different security considerations than a developer writing an application to run on top of a chain. A centralized exchange has different concerns than a group participating in a DAO. No quick blog post is going to solve all of these issues. Specifics will have to be outlined by the communities themselves, given the differences between ecosystems, but since this is a conversation starter, here are some of my thoughts.
Security is a process, not a step, and needs to be considered from the start. One obvious place to start is with the security evaluation of the architecture of a system. An architecture that doesn’t consider security is hard to apply security measures to after the fact. Blockchain ecosystems can be complex, and it’s difficult, if not impossible, to update later.
Developers also need to evaluate threats during their development process. Call it threat modeling, threat assessment, or whatever, having developers think about what could go wrong is necessary for making sure things don’t go wrong. Developers should look at the highest impact areas in their code, such as ownership checks, transfers, minting, etc.
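As a deliberately simplified illustration (this sketch is ours; the names and types are invented, and real programs would rely on their chain's account and signer-checking primitives), here is the kind of guard that belongs on those high-impact paths:

// Hypothetical, simplified sketch of guarding a high-impact action.
struct Vault {
    owner: [u8; 32], // public key of the vault owner
    balance: u64,
}

fn withdraw(vault: &mut Vault, caller: [u8; 32], amount: u64) -> Result<(), &'static str> {
    // Ownership check: omitting this guard, or checking the wrong account,
    // is a classic smart-contract mistake.
    if caller != vault.owner {
        return Err("caller is not the vault owner");
    }
    // Checked arithmetic: avoid silently underflowing the balance.
    vault.balance = vault.balance.checked_sub(amount).ok_or("insufficient balance")?;
    Ok(())
}

fn main() {
    let owner = [7u8; 32];
    let mut vault = Vault { owner, balance: 100 };
    assert!(withdraw(&mut vault, owner, 40).is_ok());
    assert!(withdraw(&mut vault, [0u8; 32], 1).is_err()); // non-owner rejected
}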
Threat modeling could start simply by using the core questions of the Threat Modeling Manifesto while performing development tasks.
Tools will help, but they won’t solve all of the issues. This is one point traditional and Web3 applications have in common.
The bottom line is that you’ll need security expertise to get this off the ground. If you don’t have that expertise available, you can engage a partner or consider hiring someone to focus on these issues.
Security isn't something you finish. The entire design and development process should consider questions about risk and security. Make security an ongoing conversation. Recurring reviews, whether by a trusted partner, through pair programming, or by a community representative, should be conducted.
Code additions, be they through dedicated developers or community contributions, should receive security scrutiny, with a focus on high-value functions and your threat model in mind.
And, of course, continue threat modeling. This should never stop.
Projects and chains should publish clear security guidance for developers on their platforms. This guidance should outline things that are considered unsafe and warn developers of potential landmines. This guidance should be followed up with other awareness activities such as webcasts, workshops, etc. Security guidance should be updated as new attack vectors are discovered. This won’t stop developers from creating vulnerabilities but may reduce the obviously dangerous mistakes.
A clear process for reporting potential vulnerabilities should be published. Details of issues, especially for critical vulnerabilities, should not be public. Code fixes should also not be made public until they’ve been applied to the running code. The goal here is to reduce the window for exploitation to a size where, once an attacker finds out, they won’t have time to exploit.
A bug bounty program can also be part of this process to entice people to disclose bugs responsibly. Offering rewards upfront is better than begging attackers to give back what they stole.
I hope this post starts some conversations and explains a bit about how we got where we are. The recommendations made here are only a simple start. There is much more work to be done.
The Web3 space is a challenging place to apply security, something that should get security professionals excited. If we do this right, there may be lessons we can apply back to traditional application security as well.
An anonymous attacker exploited a verification flaw in the Wormhole program, and 80,000 wETH were pulled out of the Wormhole contract. The problem was the use of the load_instruction_at function in the verify_signatures instruction of the Wormhole program. After forging the signature verification of a malicious message, the attacker was able to transfer tokens from Solana, identical to legitimate tokens, through the Wormhole bridge to Ethereum.
Wormhole Bridge is a bridge between blockchains: it allows transferring assets from one blockchain to another. More precisely, it is a token bridge and an NFT bridge. Tokens are created on each chain; for example, on Ethereum they are ERC20 tokens and on Solana they are SPL tokens. In addition, a smart contract (or program, on Solana) manages each token on each chain. On Solana, the Wormhole program is deployed here. The BPF bytecode is available, and the source code, written in Rust, is open source.
On top of that, Guardians manage transactions between the blockchains. Before a token is transferred to another chain, they check that minted tokens were correctly generated by verifying their signatures on the secp256k1 curve.
In Solana, the instruction_sysvar account contains all instructions of the message of the transaction that is being processed. This allows program instructions to reference other instructions in the same transaction (https://docs.solana.com/developing/runtime-facilities/sysvars#instructions).
For Wormhole, the verify_signatures function is called first to produce the signed signature_set that is later consumed by the post_vaa function. Its accounts are defined as follows:
pub struct VerifySignatures<'b> {
    /// Payer for account creation
    pub payer: Mut<Signer<Info<'b>>>,
    /// Guardian set of the signatures
    pub guardian_set: GuardianSet<'b, { AccountState::Initialized }>,
    /// Signature Account
    pub signature_set: Mut<Signer<SignatureSet<'b, { AccountState::MaybeInitialized }>>>,
    /// Instruction reflection account (special sysvar)
    pub instruction_acc: Info<'b>,
}
However, the verify_signatures function used the load_instruction_at function, which parses an instruction out of whatever input data it is given (intended to be the data of the instructions sysvar account). This function does not check that the supplied account is the real sysvar account. Basically, the instructions sysvar account was never verified.
let secp_ix = solana_program::sysvar::instructions::load_instruction_at(
    secp_ix_index as usize,
    &accs.instruction_acc.try_borrow_mut_data()?,
)
Thus, the attacker created a fake instruction sysvar account with fake data (https://solscan.io/account/2tHS1cXX2h1KBEaadprqELJ6sV9wLoaSdX68FqsrrZRd) and spoofed the signature checks using a previously valid token transfer (https://solscan.io/tx/5fKWY7XyW6PTzjviTDvCTpsqgfoGAAqUs1mC6w4DZm25Ppw7fX7aWDmrnkknewyZ81qMSix3c18ZuvjoZUF34tpa). As a result, all signatures in the signature_set were marked as true, as if every guardian signature were valid.
for s in sig_infos {
    if s.signer_index > accs.guardian_set.num_guardians() {
        return Err(ProgramError::InvalidArgument.into());
    }

    if s.sig_index + 1 > sig_len {
        return Err(ProgramError::InvalidArgument.into());
    }

    let key = accs.guardian_set.keys[s.signer_index as usize];
    // Check key in ix
    if key != secp_ixs[s.sig_index as usize].address {
        return Err(ProgramError::InvalidArgument.into());
    }

    // Overwritten content should be zeros except double signs by the signer or harmless replays
    accs.signature_set.signatures[s.signer_index as usize] = true;
}
Once a signature_set is created, the post_vaa function checks whether it contains enough signatures to reach the consensus needed to post a Validator Action Approval (VAA). At that point the attacker had a valid VAA and could trigger an unauthorized mint to his own account.
let signature_count: usize = accs.signature_set.signatures.iter().filter(|v| **v).count();

// Calculate how many signatures are required to reach consensus. This calculation is in
// expanded form to ease auditing.
let required_consensus_count = {
    let len = accs.guardian_set.keys.len();
    // Fixed point number transformation with one decimal to deal with rounding.
    let len = (len * 10) / 3;
    // Multiplication by two to get a 2/3 quorum.
    let len = len * 2;
    // Division to bring number back into range.
    len / 10 + 1
};

if signature_count < required_consensus_count {
    return Err(PostVAAConsensusFailed.into());
}
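To make the fixed-point arithmetic concrete, here is a small standalone sketch (ours, not part of the Wormhole codebase) that mirrors the computation above; for instance, with 19 guardians, 13 valid signatures are required, i.e. strictly more than a 2/3 quorum after rounding:

// Mirrors the expanded fixed-point quorum computation from post_vaa above.
fn required_consensus_count(num_guardians: usize) -> usize {
    let len = num_guardians * 10; // fixed point, one decimal
    let len = len / 3;            // one third
    let len = len * 2;            // two thirds
    len / 10 + 1                  // back into range, plus one
}

fn main() {
    assert_eq!(required_consensus_count(19), 13); // 2/3 of 19 = 12.67 -> 13
    println!("19 guardians -> {} signatures", required_consensus_count(19));
}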
We want to emphasize that it is very important to verify the validity of unmodified, reference-only accounts in Solana (https://docs.solana.com/developing/programming-model/accounts#verifying-validity-of-unmodified-reference-only-accounts). This is because a malicious user could create accounts with arbitrary data and then pass these accounts to the program in place of the valid accounts. This attack is an example of exactly that.
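As a minimal sketch of the missing validation (our illustration based on the Solana documentation, not the actual Wormhole patch; the helper name is ours), a program can compare the supplied account's public key against the well-known instructions sysvar address before trusting its data:

use solana_program::{account_info::AccountInfo, program_error::ProgramError, sysvar};

// Hypothetical guard: reject any account whose key is not the genuine
// instructions sysvar (Sysvar1nstructions1111111111111111111111111).
fn assert_instructions_sysvar(instruction_acc: &AccountInfo) -> Result<(), ProgramError> {
    if !sysvar::instructions::check_id(instruction_acc.key) {
        return Err(ProgramError::InvalidArgument);
    }
    Ok(())
}

Newer releases of solana_program also deprecate load_instruction_at in favor of load_instruction_at_checked, which performs this kind of sysvar account validation internally.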
The attack on Wormhole is the second-largest reported hack after the Poly Network hack (https://research.kudelskisecurity.com/2021/08/12/the-poly-network-hack-explained/). The attacker was able to steal crypto-assets worth $324 million because of a single missing check. This is another costly lesson for all blockchain developers, especially Solana program developers.
Our analysis tried to summarize and give a bit of context to the analyses reported during the first hours of the hack.
Post written by: Tuyet Duong and Sylvain Pelissier
Marinade is the "easiest way to stake Solana": a liquid staking protocol built on Solana where people can stake, use automated staking strategies, and receive tokens they can use within DeFi systems or swap back to unstake. The programs are written primarily in Rust.
For this blog, we will discuss the work executed during our security assessment for the Marinade team in 2021.
For a more in-depth overview of Marinade and its roadmap, please see Marinade’s documentation page here.
To begin, Marinade walked us through their repository, as well as their design documents and Medium blog.
Our assessment covered code committed as of October 15, 2021 and focused on the following objectives:
We follow a focused methodology in reviewing solutions such as Marinade. Not only do we perform a threat assessment of possible exploits of the system, but we also conduct a review of the code, appropriate usage of the SPL, fund-loss scenarios, and program authentication scenarios and components. In all situations, the Marinade solution met our requirements for an effectively implemented product, including resolving any findings we uncovered.
In the security report, we identified one MEDIUM, one LOW, and one INFORMATIONAL finding.
After finalizing the assessment, we verified these few initial weaknesses in the codebase but did not find any critical fund-loss or staking issues, and the team quickly resolved all findings in the code to our satisfaction prior to deployment.
It was a pleasure working with the Marinade team, and we look forward to working with them again in the future.
The full Kudelski Security report is located here: https://marinade.finance/KudelskiSecurity.pdf
Authors: Antonio de la Piedra (Kudelski Security Research Team) and Marloes Venema (Radboud University Nijmegen)
This week at Black Hat Europe 2021, we presented our work on attacking attribute-based encryption implementations: https://www.blackhat.com/eu-21/briefings/schedule/#practical-attacks-against-attribute-based-encryption-25058.
Attribute-based encryption (ABE) provides fine-grained access control on data where the ability to decrypt a ciphertext is determined by the attributes owned by a user of the system. Hence, data can be stored by an entity that is not necessarily trusted to enforce access control.
ABE has been proposed to secure the Internet of Things and enforce authorization in Cloud systems. This is typically exemplified in the healthcare setting, where all “nurses” of the hospital “A” can only decrypt certain records whereas “doctors” of the same hospital have access to additional information about the patients.
In this type of deployment, the following parties are involved:
Typically, ABE schemes are based on pairings (although some new schemes based on lattice assumptions have appeared in the last few years), since it is generally known that secure schemes based only on ECC assumptions (such as DDH) do not exist.
For instance, in the example below, Bob has the following attributes: "doctor", "Mayo Clinic", and "neurology". In this particular case, another user in the system, Alice, can encrypt a message for Bob using the following policy: "(doctor or nurse) and Mayo Clinic and neurology". Bob can then decrypt this message since his attributes (doctor, "Mayo Clinic", and "neurology") satisfy the policy used by Alice.
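To make the policy semantics concrete, here is a toy sketch (ours, purely illustrative): it evaluates the example policy over a set of attributes in the clear, whereas real ABE enforces this check cryptographically during decryption, so no such plaintext test ever runs.

use std::collections::HashSet;

// Toy evaluation of the policy "(doctor or nurse) and Mayo Clinic and neurology".
fn satisfies_policy(attrs: &HashSet<&str>) -> bool {
    (attrs.contains("doctor") || attrs.contains("nurse"))
        && attrs.contains("Mayo Clinic")
        && attrs.contains("neurology")
}

fn main() {
    let bob: HashSet<&str> = ["doctor", "Mayo Clinic", "neurology"].into_iter().collect();
    assert!(satisfies_policy(&bob)); // Bob's attributes satisfy Alice's policy
}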

Moreover, multi-authority variants of ABE exist and extend these capabilities to multiple-domain settings thus removing the requirement of having a trusted third party.
For instance, in this case both Bob and Charlie can receive attributes from two attribute authorities, the Hospital and the Insurance company authorities.

ABE can be utilized as an authorization mechanism in the Cloud, as different works have proposed. In this case, data owners, e.g. Alice, publish:
Below, we show how ABE can be used in the Cloud by depicting the general architecture of DAC-MACS [1], a highly-cited scheme:

In this case, there are two KGAs in the system: the Insurance company KGA and the Hospital KGA. Alice is the data owner who wants to share sensitive data with the user Charlie. First, Alice generates a symmetric encryption key that she uses to encrypt a message. The message is encrypted using the following policy: '(doctor or nurse) and Mayo Clinic and neurology'. Using the token generation mechanism of DAC-MACS [1], Charlie can obtain the ciphertext created by Alice and recover the content key that opens the sensitive data shared by Alice.
On the other hand, other practitioners have proposed securing Internet of Things deployments using ABE. In this case, most works are related to the Smart City paradigm: different types of sensing data are gathered from various sources in the city, such as transportation providers and energy infrastructure, with the goal of optimization. ABE can then be used to enforce authorization on the collected data for different data owners for analysis. One ABE scheme that focuses on IoT deployments and is provided by different open-source libraries is YCT14 [2].
Several practitioners have proposed techniques and heuristics to analyze the security of ABE schemes. This year, at the CT-RSA 2021 conference [3], Venema and Alpár presented attacks against 11 ABE and MA-ABE schemes, including DAC-MACS [1] and the YJ14 scheme [4]. Further, in 2019, Herranz [5] showed that several schemes based only on elliptic curves, such as YCT14 [2], were broken.
In our talk, we demonstrated the practicality of these attacks. We have implemented three different types of attacks:
Open-source libraries such as CHARM [6] and RABE [7] provide, among others, implementations of these schemes. We have implemented the attacks against the CHARM cryptographic library and show that the implementations of the DAC-MACS [1], YJ14 [4], and YCT14 [2] schemes provided by this particular library are vulnerable to decryption attacks.
Based on the status of the schemes, we have obtained the following CVEs:
Together with our presentation, we provide a Python library implementing some of the cryptanalytic attacks of Venema and Alpár [3] against the aforementioned ABE schemes: abeattacks (available at https://pypi.org/project/abeattacks/).
Further, we have prepared 3 Jupyter notebooks where ABE and the practical attacks against the ABE schemes are illustrated (available at https://github.com/kudelskisecurity/abeattacks/jupyter/). These notebooks can be used to learn more about the attacks in practice.
We have released a Dockerfile with everything ready at https://github.com/kudelskisecurity/abeattacks/tree/main/docker. You can follow the instructions below to see how the attacks work in practice:
$ git clone https://github.com/kudelskisecurity/abeattacks/
$ cd abeattacks/docker
$ ./build_and_run.sh
Then, open your browser at the location suggested by Jupyter:

You can follow the decryption attack against DAC-MACS [1] for instance:

Finally, we have published the slides of our presentation at https://github.com/kudelskisecurity/abeattacks/tree/main/slides/.
(We use URLs to full papers in PDF if they are available).
[1] http://www.acsu.buffalo.edu/~kuiren/DACMACS.pdf
[2] https://daneshyari.com/article/preview/424591.pdf
[3] https://eprint.iacr.org/2020/460.pdf
[4] https://www.computer.org/csdl/journal/td/2014/07/06620875/13rRUIJuxpd
[5] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9291064
[6] https://github.com/JHUISI/charm
[7] https://github.com/Fraunhofer-AISEC/rabe
Many static analysis tools exist out there for detecting security issues. These tools are a necessary part of the development lifecycle. Detecting issues is great but it’s just the first step in the process. Someone still has to remediate those issues. What if we could automatically fix them?
Semgrep is a great static analysis tool. It has a lesser-known but really neat feature in development called Autofix. This feature not only lets you detect security issues, but also automatically fix them, as long as the rule that matched is autofix-capable. Let's see how this can be achieved with a couple of examples.
Let’s assume we have the following source code in the file buffer-overflow.c:
#include <stdio.h>
#include <bsd/string.h>

int do_stuff(int len, char *b) {
    if (len % 8) {
        printf("mod 8\n");
    }
    char a[10];
    printf("working...\n");
    strcpy(a, b);
    printf("%s", a);
    return 42;
}

int main() {
    char b[20] = "abc";
    do_stuff(10, b);
}

This code may lead to a buffer overflow. One should not use strcpy but strlcpy, passing the size of the destination buffer instead.
We can write a Semgrep rule to automatically fix this issue in a file named buffer-overflow.yml, using the fix attribute in our rule:
rules:
  - id: buffer-overflow
    patterns:
      - pattern-either:
          - pattern: |
              char $A[$SIZE];
              $...REST;
              strcpy($A, $B);
    fix: |
      char $A[$SIZE];
      $...REST
      strlcpy($A, $B, $SIZE);
    message: "Use of strcpy is insecure and may lead to buffer overflow. Use strlcpy instead."
    languages: [ c ]
    severity: ERROR

Notice the use of the $...FOOBAR syntax to match every instruction between char $A[$SIZE]; and strcpy($A, $B); so that we can put it back into the replacement code.
Now we can run semgrep with the --autofix or -a flag:
$ semgrep --config buffer-overflow.yml --autofix buffer-overflow.c

Our source code file has successfully been fixed:
#include <stdio.h>
#include <bsd/string.h>

int do_stuff(int len, char *b) {
    if (len % 8) {
        printf("mod 8\n");
    }
    char a[10];
    printf("working...\n");
    strlcpy(a, b, 10);
    printf("%s", a);
    return 42;
}

int main() {
    char b[20] = "abc";
    do_stuff(10, b);
}

Semgrep also supports regex replacement within a match. Suppose we have the following source code in a file named sid.rs:
fn main() {
    let env = "production";
    println!("env = {}", env);
    let sid = "1336-something";
    if env == "production" {
        // Note: sid has format "level-name"
        // and level is a four digit number which should never end with 6 in production!
        // Use sid levels ending with 7 instead
        let production_sid = "1336-foobar";
        println!("production sid = {}", production_sid);
    } else {
        println!("sid = {}", sid);
    }
}

Imagine that sid values should always end with a 7 when used in production. Let's write a Semgrep rule, in the file sid.yml, that automatically fixes this but only when used in production:
rules:
  - id: sid
    patterns:
      - pattern-either:
          - pattern: |
              if env == "production" {
                  ...
                  let $FOO = "$Y";
                  ...
              }
    fix-regex:
      regex: '(?P<start>[0-9]{3})(?P<last>[0-9]{1})-(?P<description>.*)'
      replacement: '\g<start>7-\g<description>'
    message: "Sid level should always end with a 7 in production."
    languages: [ rust ]
    severity: ERROR

Note that we use (?P<GROUP_NAME>REGEX_PATTERN) regex syntax here so that named captured groups can be referenced by their name using \g<GROUP_NAME> syntax in the replacement text.
Now let’s run our rule on our file:
$ semgrep --config sid.yml sid.rs -a

Our code is now fixed in the right place only (variable production_sid):
fn main() {
    let env = "production";
    println!("env = {}", env);
    let sid = "1336-something";
    if env == "production" {
        // Note: sid has format "level-name"
        // and level is a four digit number which should never end with 6 in production!
        // Use sid levels ending with 7 instead
        let production_sid = "1337-foobar";
        println!("production sid = {}", production_sid);
    } else {
        println!("sid = {}", sid);
    }
}

Semgrep's autofix feature can go the extra mile and prevent developers from introducing security issues in a production codebase by automatically fixing them.
A possible first step would be to instruct all developers to use pre-commit and install a pre-commit hook that runs autofix Semgrep rules automatically before any commit is made. For example, this can be documented in the project's README. This, however, does not prevent anyone from not using pre-commit.
One can be even more strict and set up a CI pipeline that runs our pre-commit hook whenever a pull request is made. If the pre-commit hook changes the code, then it means someone pushed a commit without running pre-commit hooks. In such a case, one can decide to make the pipeline fail. Of course, one would only allow pull requests to be merged if the pipeline successfully completes and we would also disable directly writing to the main branch.
Providing capabilities in scanning tools beyond detecting and reporting could have a notable impact on code security and speed of development. Even though Autofix is still in its very early stages, we have seen that it is possible to do more and that automatically fixing security issues is possible today. We hope that these examples will be helpful to others too. Keep shrinking that attack surface.
As a continued extension of our decentralized partner innovation ecosystem, I am excited to announce that we have partnered with Panther Protocol to deliver increased privacy enablement as we move forward with delivery of data driven solutions within the US and the UK.
One of the core tenets of the Kudelski ecosystem has always been secrets management within chips, root of trust (RoT), protection of digital artifacts, and ensuring the safety of our customers.
Extending our partner network into the blockchain ecosystem with Panther’s privacy preserving protocol accelerates our ability to bring data marketplace, data monetization, and DeFi enabled ecosystems more quickly to market and to offer more advanced service and build capabilities.
Our first expansion of these concepts will be into the UK market where we will work with the Panther team as well as their privacy-first Web3 development partner Stelium to unlock value within data inside the UK economy.
As we develop this relationship, expect some thought leadership pieces as well as some exciting technology advancements as we explore privacy-first architectural advancements in wallets, key management, and scale.
Welcome Panther to the Kudelski Partnership Network!
To All Expert Blockchain Companies, Who May Be Interested in Joining our Partner Pool…
Watch out Decentralized Finance, here comes Decentralized Partner Innovation (DEPI)!
The “Speed of Crypto” is honestly at a level none of us have seen before. Even though we employ a team with deep expertise across many cryptocurrency technologies and chains, no one organization (even one in 30+ countries) can hope to keep up with the fast-paced changes we’re experiencing.
Our business is literally on fire. We are finding more situations where we have to either scale beyond our current team to meet the needs, or augment our team with specific expertise we don’t already have.
To help us meet these needs, we have built a model (and invented yet another acronym!) to utilize experts as part of an expanded team of decentralized partners. DEPI will help us deliver world-class security capabilities and meet the ever-expanding needs of our global client base. These partner organizations and/or individual contributors are vetted and bring expertise or parallel/specific expertise to complement or enhance our abilities to help on these very specialized projects.
There are a lot of reasons that we chose to build a decentralized partner team.
First – we can’t be in every country and meet every employment obligation globally… It isn’t feasible and just doesn’t make sense. (Plus – I need to sleep every once in a while…)
Second – some of the best talent in the crypto market have done very well for themselves and do not work for ANYONE. But, while these people enjoy their independence, they also appreciate having access to a larger organization that can offer interesting and challenging projects. This becomes a win-win marriage for both parties – providing them stimulating engagements while enabling us to meet our client needs.
Third – We have high standards. We never skimp on quality. Demand for our services outstrips supply, so we look to expand our resource pool rather than cut corners to save time and be able to deliver against the growing number of projects. We verify that every partner member of this network has a high level of expertise and delivers top-of-the-line quality. In fact, we are so confident that we have done a good job screening these partners that any work that utilizes our partners is under contract with Kudelski, with the concomitant safeguards, Terms and Conditions, and logo. So, anyone engaging our services gets the deep expertise and backing of Kudelski along with the latest in cross-pillar, highly focused expertise required in these fast-moving times.
“It’s hard to find a company that knows how to do this” is something I hear, LITERALLY, every day. I believe that this model will allow Kudelski Security to be the organization that knows how to deliver as well as having the capacity to do so.
Some of the partners that have agreed to add their teams, or supply team members, to our pool of talent are:
As we continue to grow and scale, we will continue to add to our pool of experts as needed.
Are you interested in joining our team or being a node in our decentralized pool of talent? Please contact me here!
The HACK@CHES 2021 Phase I competition ran from June 17 to August 16, 2021. During the competition, participants were given a bundle with a set of Verilog design files of a System on Chip (SoC). The goal was to discover vulnerabilities and report them to the judges of the CTF. According to a scoring system, a number of points was awarded for each vulnerability reported. Additional points were given if an exploit was provided or if the weakness was located in the ROM. The best teams were selected for Phase II of the contest, which happened during the CHES conference.
The SoC is based on OpenPiton, an open-source processor that uses CVA6 64-bit RISC-V cores. The design files were available in the bundle, and it was possible to simulate them in software using the Verilator simulator or on an FPGA board. The SoC implements many peripherals, among them three AES cores (AES0, AES1, and AES2), a TRNG core, and an RSA core.
We were granted 60 points for a vulnerability discovered in the AES0 module, which is the maximum number of points possible for a unique bug not located in ROM. The following details our findings and, more generally, how to simulate fault attacks on hardware designs to reveal hardware weaknesses.

One important step during hardware design is simulation. Since hardware is less easily patchable than software, hardware designs are heavily tested before tapeout. Basically, Verilator is an open-source simulator which transforms a Verilog design into a C++ program that keeps track of the execution cycles of the original design. This means it is possible to simulate the design in software and observe the timing of each execution step and the output result of a module.
It was possible to emulate the full SoC in software, but the simulation was very slow to run. In our case, we were interested in AES0, so we simulated only this design. The design files were located in the folder /piton/design/chip/tile/ariane/src/aes0. AES0 is a peripheral implementing AES-192. The top-level module is called aes_192_sed; it takes as inputs a 16-byte plaintext, a 24-byte key, and a start signal. As output, when the out_valid signal is high, the result signal contains the AES encryption result.
We have set up a Git repository with all the files needed for the simulation. We created a simulation program, simulation.cpp, which is in charge of feeding the inputs to the module and clocking it until we get an output. Using Verilator is then similar to compiling with GCC:
$ verilator -cc aes_192_sed.v -f input.vc --Mdir build -o simu --exe simulation.cpp
make -C build/ -f Vaes_192_sed.mk simu

The input.vc file contains all the modules needed for the simulation. Then we are able to verify that the AES0 simulation works properly:
$ ./build/simu
[+] Simulation with Verilator
Using key:
8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
Using plaintext:
6bc1bee22e409f96e93d7e117393172a
Resulting ciphertext:
bd334f1d6e45f25ff712a214571fa5cc

So far, we are able to properly simulate the AES0 module.

The idea of fault simulation is not new; we previously developed a tool called Glitchoz0r 3000, presented at R2Con 2020. The idea of that tool is to emulate a firmware using radare2 ESIL and try to inject a fault into registers or instructions at each step to see if a security mechanism can be bypassed or corrupted. Another tool, called FiSim, was developed by the Riscure company and performs similar tests using the Unicorn framework and the Capstone disassembler. Finally, another tool called VerFI allows simulating faults in the netlist of a design, but that first requires synthesizing the design, which was not what we wanted to do. In our case, we were interested in a pure software solution that simulates faults directly on Verilog hardware designs.
Since Verilator was already available for the SoC, we based our solution on it and created an executable we named Verilaptor.
One standard hardware attack against AES is differential fault analysis (DFA). The idea is to introduce faults in the last rounds of AES and collect the faulted ciphertexts to recover the secret key. For the attack to succeed, we must inject faults at the output of the AES Sbox. In the design, this is implemented in the module table_lookup in the file table.v. However, to optimize simulation execution, Verilator does not let us access internal signals of the design unless we explicitly tell it to. To do so, we added the comment /*verilator public*/ after the signal definitions in order to have access to those signals from our simulation program:
module table_lookup (clk, state, p0, p1, p2, p3);
    input clk;
    input [31:0] state;
    output [31:0] p0, p1, p2, p3/*verilator public*/;
    wire [7:0] b0, b1, b2, b3;

In Verilaptor, we create a function which simulates a random fault after the Sbox operation during round 10:
void tick_fault_r10(std::unique_ptr<Vaes_192_sed>& top, int sbox_num, int value) {
    top->clk = 0;
    top->eval();
    // Inject fault at output of sbox
    switch (sbox_num) {
        auto sbox = top->aes_192_sed->uut->r10->t0;
        sbox->p3 = sbox->p3 ^ value;
        ...
A random value is XORed into the Sbox outputs. Thus, when executing Verilaptor, we were able to obtain faulted ciphertexts:
$ ./simu/build/veriraptor -v
[+] Fault simulation with Verilator
Using key:
d3b80fd1a0b09cefc4d343c0a7dac0b1942ca63151a89b91
Using plaintext:
f8a53552683866603d9a7dfe5982bc6f
Getting ciphertext:
5a802cf68638a0ee341e3ee25201ae1a
5a802cf68638a0ee341e3ee25201ae1a
5a8064f6860fa0ee7d1e3ee25201ae43
5a80c0f68649a0ee5d1e3ee25201ae5a
5a802cd986384eee34193ee21a01ae1a
5a802c42863896ee349d3ee2a301ae1a
a2802cf68638a0d9341ea1e25269ae1a
87802cf68638a069341e83e25263ae1a
...

The faulted outputs were then collected into files in order to recover the AES round keys by differential fault analysis. Currently, the signals to fault are hardcoded in our simulation. An interesting extension of Verilaptor would be to iterate over a list of internal signals and fault each of them in turn; this would make it possible to simulate fault attacks against various other designs, such as RSA implementations.
The standard DFA against AES has been well documented in the past; for example, Quarkslab gave a complete description of DFA applied to white-box cryptography for all variants of AES. In our case, we were attacking the 192-bit version. It works essentially like the attack on AES-128, but the AES-192 key schedule is slightly different: since the master key is 192 bits while each round key is only 128 bits, reversing the key schedule requires the last round key plus half of the previous one. The idea is therefore to first perform the standard DFA against the last round key, the 13th. For that, the corruption needs to happen during round 11. The faults are collected in a file and fed to the PhoenixAES tool from the Side-Channel Marvels project, which implements the DFA:
print("[+] DFA on 13th round\n")
subkey13 = phoenixAES.crack_file("tracefile_r11", verbose=0)

Once the 13th round key is recovered, we can revert the last AES round and perform the DFA again against the previous round to recover the 12th round key. There is one subtlety when attacking the 12th round key: we are now attacking a round that includes a MixColumns step, which was not the case for the last round. Fortunately, PhoenixAES v0.0.4 handles this for us when the previously recovered round key is passed as an argument (thanks @doegox for the remark and the bugfixes):
print("[+] DFA on 12th round\n")
subkey12 = phoenixAES.crack_file("tracefile_r10", lastroundkeys=[unhexlify(subkey13)], verbose=0)

The full attack thus recovers the last two round keys:
$ python3 attack_hack_ches21.py
[+] DFA on 13th round
Last round key #N found:
83FD2BED375F3431BFE5939B188C895B
[+] DFA on 12th round
Round key #N-1 found:
88BAA7AAA7691AC01614D51D28A7F490
Concatenated round keys:
88BAA7AAA7691AC01614D51D28A7F49083FD2BED375F3431

Finally, Stark is a very convenient tool: given the recovered round keys, it reverses the key schedule algorithm and recovers the secret key (for AES-192, the first round key concatenated with the first half of the second):
$ ./Stark/aes_keyschedule 88BAA7AAA7691AC01614D51D28A7F49083FD2BED375F3431 11
K00: D3B80FD1A0B09CEFC4D343C0A7DAC0B1
K01: 942CA63151A89B9110AC8E00B01C12EF
K02: 74CF512FD315919E473937AF1691AC3E
K03: 933D3C4723212EA857EE7F8784FBEE19
K04: C3C2D9B6D55375887AA0F8445981D6EC
K05: 0E6FA96B8A94477249569EC49C05EB4C
K06: 1949D19A40C807764EA7AE1DC433E96F
K07: 8D6577AB11609CE7D9974518995F426E
K08: D7F8EC7313CB051C9EAE72B78FCEEE50
K09: 72BF166BEBE054053C18B8762FD3BD6A
K10: B17DCFDD3EB3218D5F424BD9B4A21FDC
K11: 88BAA7AAA7691AC01614D51D28A7F490
K12: 83FD2BED375F3431BFE5939B188C895B

The full attack is implemented in the attack.sh script of our repository: it runs the fault injection, then the DFA on the faulted results.
We found this approach really interesting, since Verilator makes it easy to test a hardware design purely in software and mount fault attacks against it. The approach is also quite generic: it could be adapted to various hardware designs, for example to check whether countermeasures developed to thwart fault attacks actually resist in simulation before the design is deployed in the field. Since the simulation is cycle accurate, the same approach could also be used to simulate timing attacks.
Solana is an open-source blockchain protocol designed for web-scale speed and decentralization. The protocol introduces eight core technologies that provide the infrastructure necessary for DApps and decentralized marketplaces. Solana combines proof-of-stake (PoS) consensus with a proof-of-history (PoH) mechanism to improve throughput and scalability; as a result, the network claims to support 50,000 transactions per second (TPS), which would make it one of the fastest blockchains in the world.
In this post, we will talk about Solana program security, in particular some common security vulnerabilities in Solana programs. This post assumes advanced knowledge of the Solana program library and a basic understanding of Rust.
Introduction to the Solana Programming Model
Smart contracts in Solana are written in Rust or C, and they are called Programs. Solana programs can own accounts and modify the data of the accounts they own. Besides the on-chain programs developed and deployed by Solana programmers, there are several native programs required to run validator nodes. One of these native programs is the System Program, which can create new accounts, allocate account data, assign accounts to owning programs, transfer lamports from System-Program-owned accounts, and pay transaction fees.
Program id: 11111111111111111111111111111111
Instructions: SystemInstruction
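For illustration, here is a minimal sketch of building such a System Program instruction from Rust, using the solana_program crate; the helper name and parameters are ours, not taken from any official example:

use solana_program::{instruction::Instruction, pubkey::Pubkey, system_instruction};

// Build a System Program instruction that creates `new_account`,
// funds it with `lamports`, reserves `space` bytes of data, and
// assigns ownership to `owning_program`.
fn build_create_account_ix(
    payer: &Pubkey,          // funds the new account (must sign)
    new_account: &Pubkey,    // address of the account to create (must sign)
    owning_program: &Pubkey, // program that will own the new account
    lamports: u64,           // rent-exempt balance for the account
    space: u64,              // size of the account's data field in bytes
) -> Instruction {
    // The returned instruction targets the System Program
    // (id 11111111111111111111111111111111).
    system_instruction::create_account(payer, new_account, lamports, space, owning_program)
}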
Solana accounts
An account in Solana contains several fields, such as owner, data, lamports, executable, …, which are set by the System Program. If a program needs to store state between transactions, it does so using the data field of the account. Below are two example accounts: account_1 and account_2 are owned by a token program and a stake pool program, respectively, so the data of each account is specified and updated by its owner program. More detail on Solana accounts is available at https://docs.solana.com/developing/programming-model/accounts.
System_Program -> account_1 -> { owner = token_program_id, lamports, executable, rent_epoch, data -> Account { owner, state, mint, … } }
System_Program -> account_2 -> { owner = stake_pool_program_id, lamports, executable, rent_epoch, data -> Stake_pool { manager, state, staker, … } }
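To make these fields concrete, the following illustrative helper (our own sketch, based on the solana_program crate) shows how an on-chain program can read them from an input account:

use solana_program::{account_info::AccountInfo, msg, program_error::ProgramError};

// Log the core fields the runtime exposes for an input account.
fn inspect_account(account: &AccountInfo) -> Result<(), ProgramError> {
    msg!("owner:      {}", account.owner);      // program allowed to modify `data`
    msg!("lamports:   {}", account.lamports()); // current balance
    msg!("executable: {}", account.executable); // true for deployed programs
    msg!("rent_epoch: {}", account.rent_epoch);
    let data = account.try_borrow_data()?;      // state persisted between transactions
    msg!("data length: {} bytes", data.len());
    Ok(())
}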
Solana Program flow
An app interacts with a Solana cluster by sending it transactions containing one or more instructions. The Solana runtime passes those instructions to programs that app developers have deployed beforehand; the instructions are then executed and validated by Solana validators.
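From the client side, that flow looks roughly like the following sketch (using the solana_client and solana_sdk crates; the function is our own illustration, and the blockhash API may differ slightly between SDK versions):

use solana_client::rpc_client::RpcClient;
use solana_sdk::{
    instruction::Instruction,
    signature::{Keypair, Signer},
    transaction::Transaction,
};

// Wrap a single instruction in a signed transaction and submit it
// to the cluster, where validators execute and validate it.
fn send_one_instruction(
    rpc: &RpcClient,
    payer: &Keypair,
    ix: Instruction,
) -> Result<(), Box<dyn std::error::Error>> {
    let blockhash = rpc.get_latest_blockhash()?;
    let tx = Transaction::new_signed_with_payer(
        &[ix],                 // the transaction's instruction list
        Some(&payer.pubkey()), // fee payer
        &[payer],              // signers
        blockhash,
    );
    rpc.send_and_confirm_transaction(&tx)?;
    Ok(())
}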

What can go wrong?
It is important to note that a Solana transaction specifies a list of public keys, signatures for those keys, and a sequential list of instructions that operate on the state associated with the account keys. This design helps optimize throughput, but it also means that malicious users can pass in arbitrary accounts, so it is the program's job to protect its state and data from malicious input accounts.
Writing a Solana program is pretty simple if you know Rust or C and understand the Solana programming model; here are some examples (https://docs.solana.com/developing/on-chain-programs/examples). However, writing a “SECURE” program on Solana is non-trivial. As discussed above, a program can write anything to the data of the accounts it owns, so it is the program's responsibility to validate and protect its input account data.
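To see why, consider a minimal (purely illustrative) program entrypoint: every account in the accounts slice was chosen by whoever built the transaction, so nothing in it can be trusted before it is checked:

use solana_program::{
    account_info::AccountInfo, entrypoint, entrypoint::ProgramResult, pubkey::Pubkey,
};

entrypoint!(process_instruction);

fn process_instruction(
    _program_id: &Pubkey,
    accounts: &[AccountInfo],
    _instruction_data: &[u8],
) -> ProgramResult {
    // `accounts` is entirely caller-controlled input: any account on the
    // chain can appear here. Ownership and state checks must come first.
    let _maybe_hostile = accounts.first(); // could be any account the caller chose
    Ok(())
}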
We will discuss two types of validation that are important and whose absence can be exploited, potentially leading to loss of funds, if the program does not validate its inputs properly.
1. Account ownership validation
One of the most important validations is to check whether the owners of the input accounts are the expected ones. For example, the Solana stake-pool program performs the following check for every input stake account to ensure it is owned by the Solana stake program, as expected.
/// Check stake program address
fn check_stake_program(program_id: &Pubkey) -> Result<(), ProgramError> {
    if *program_id != stake_program::id() {
        msg!(
            "Expected stake program {}, received {}",
            stake_program::id(),
            program_id
        );
        Err(ProgramError::IncorrectProgramId)
    } else {
        Ok(())
    }
}
We note that without check_stake_program, a malicious user could pass in accounts owned by a malicious program. Similarly, the following function checks whether the input accounts are indeed owned by the Solana system program.
/// Check system program address
fn check_system_program(program_id: &Pubkey) -> Result<(), ProgramError> {
    if *program_id != system_program::id() {
        msg!(
            "Expected system program {}, received {}",
            system_program::id(),
            program_id
        );
        Err(ProgramError::IncorrectProgramId)
    } else {
        Ok(())
    }
}
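Both helpers above validate a program id passed in the instruction's account list. The same idea applies to the owner field of any input account; a generic version (a hypothetical helper of ours, not taken from the stake-pool code) could look like this:

use solana_program::{
    account_info::AccountInfo, msg, program_error::ProgramError, pubkey::Pubkey,
};

// Reject any input account whose `owner` is not the expected program.
fn check_account_owner(
    account: &AccountInfo,
    expected_owner: &Pubkey,
) -> Result<(), ProgramError> {
    if account.owner != expected_owner {
        msg!(
            "Expected account owned by {}, received owner {}",
            expected_owner,
            account.owner
        );
        Err(ProgramError::IncorrectProgramId)
    } else {
        Ok(())
    }
}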
2. Account state (data) validation
Missing account state (data) validation is probably one of the highest-severity mistakes a developer can easily make. For example, in the Solana token-lending program, the data of a reserve associated with a particular lending market is specified by the following struct; its fields are instantiated when the reserve is initialized.
pub struct Reserve {
    /// Version of the struct
    pub version: u8,
    /// Last slot when supply and rates updated
    pub last_update: LastUpdate,
    /// Lending market address
    pub lending_market: Pubkey,
    /// Reserve liquidity
    pub liquidity: ReserveLiquidity,
    /// Reserve collateral
    pub collateral: ReserveCollateral,
    /// Reserve configuration values
    pub config: ReserveConfig,
}
So whenever the reserve is updated or used by a lending market, the program has to ensure that the input lending market is the one the reserve was instantiated with. Otherwise, a malicious lending market could access the reserve and potentially drain all of its funds. This check was missing in Solend, putting roughly $2 million at risk (https://docs.google.com/document/d/1-WoQwT1QrPEX-r4N-fDamRQ50LM8DsdsOyq1iTabS3Q/edit#). Fortunately, the Solend team detected and stopped the exploitation in time, and no funds were stolen.
if &reserve.lending_market != lending_market_info.key {
    msg!("Reserve lending market does not match the lending market provided");
    return Err(LendingError::InvalidAccountInput.into());
}
As another example, in the Solana stake-pool program, a stake pool is specified by the following struct, whose fields are instantiated with account public keys when the pool is initialized.
pub struct StakePool {
    pub account_type: AccountType,
    pub manager: Pubkey,
    pub staker: Pubkey,
    pub deposit_authority: Pubkey,
    pub withdraw_bump_seed: u8,
    pub validator_list: Pubkey,
    /// Reserve stake account, holds deactivated stake
    pub reserve_stake: Pubkey,
    ...
After initialization, processes that involve the reserve stake account, such as process_increase_validator_stake, process_update_validator_list_balance, and process_deposit, have to check whether the input reserve account is the same as the reserve_stake recorded in the stake pool.
pub fn check_reserve_stake(
    &self,
    reserve_stake_info: &AccountInfo,
) -> Result<(), ProgramError> {
    if *reserve_stake_info.key != self.reserve_stake {
        msg!(
            "Invalid reserve stake provided, expected {}, received {}",
            self.reserve_stake,
            reserve_stake_info.key
        );
        Err(StakePoolError::InvalidProgramAddress.into())
    } else {
        Ok(())
    }
}
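A hypothetical call site (our own sketch, assuming the StakePool struct above derives BorshDeserialize) would first deserialize the pool's state and then re-validate the supplied reserve account before touching any funds:

use solana_program::{
    account_info::AccountInfo, borsh::try_from_slice_unchecked, program_error::ProgramError,
};

// Illustrative processor fragment: load the pool state, then refuse to
// continue if the caller-supplied reserve account is not the one recorded
// at initialization. `StakePool` is the struct shown above.
fn process_with_reserve(
    stake_pool_info: &AccountInfo,
    reserve_stake_info: &AccountInfo,
) -> Result<(), ProgramError> {
    let stake_pool: StakePool = try_from_slice_unchecked(&stake_pool_info.data.borrow())?;
    stake_pool.check_reserve_stake(reserve_stake_info)?;
    // ... safe to operate on the reserve stake account from here on
    Ok(())
}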
Conclusion
These two types of mistakes are very common, easy to make, and can potentially lead to loss of funds. It is therefore necessary to ensure that account owners and account states are validated before deploying programs to the Solana mainnet. In future blog posts, we will discuss more advanced Solana concepts as well as other security vulnerabilities.