
Daniel Choi

Introduction

In software development (and especially in the blockchain ecosystem), developers often interact with Open Source Software (OSS) to either draw inspiration from it or simply copy-paste it. The latter case is a very common scenario in smart contract development: many developers fork another project's code and piggyback off another person's or team's work.

IRL

During the token craze of 2021, new tokens were being minted every hour, and many of these tokens were copied, sometimes with very minor modifications. Using another person's or team's code is always risky and could potentially introduce a "zero-day" vulnerability. An example of this is Grim Finance's exploit. Grim Finance had forked Beefy Finance's smart contracts and made changes according to their protocol's needs. Beefy Finance later uncovered a vulnerability in their smart contracts which allowed for a reentrancy attack. The same attack vector was what caused Grim Finance to be hacked.

Another project that introduced a "zero-day" vulnerability was Compound Finance. In the Compound Finance code, there was a rounding error which could be used by an attacker to manipulate empty markets. The same way Compound Finance was attacked, other similar protocols that forked its code were also attacked. Some of these protocols include Onyx, Hundred Finance, Midas Capital, and Lodestar Finance (NOT to be confused with the Lodestar Ethereum 2 client!!!).

A similar case is Meter, an unaffiliated fork of Chainbridge. Meter introduced a vulnerability that was not present in the base Chainbridge smart contracts: improper modifications to the code had opened the smart contract to different attack vectors. More information regarding this can be found here.

Transparency, Collaboration, Pacing and Audits

As part of an ongoing effort to make the blockchain ecosystem a safer environment for users to interact with and participate in, developers must take ownership of and responsibility for their shortcomings. In many cases, projects do not make it known to the developers/owners of a code base that they will be using and forking its code. This may be due to a project's desire to stay stealthy, or a wish to hide the fact that they explicitly copied another project's code. Whichever the case, the argument could be made that if the "forkers" had made themselves known to the "forkee", there could have been a collaborative effort to mitigate known vulnerabilities or to work together to uncover new ones. Transparency (as well as humility) is an important part of building a stronger community of developers so we can help shoulder these types of burdens.

Timing is very important when it comes to go-to-market strategies. The same execution at different times could produce significantly different results. During bull markets, there is pressure to release a token (or blockchain project) as fast as humanly possible. In the rush and chaos, blunders can be made, and sleep-deprived developers can make a simple but critical mistake. It is imperative to note that smart contracts (as long as they are not upgradeable) cannot be modified. Once a smart contract is deployed, the source code remains forever, unable to change. Thus, a smart contract should ideally leave no possibility for a present attack or future ones. To get closer to this goal, getting one's smart contracts audited is paramount, especially when they will deal with money and valuable assets. Especially when re-using code and introducing new logic, these changes should be audited, even if the base contracts that were forked were already audited.

Here Comes ChainSafe

ChainSafe is a blockchain development company that loves to work with the community and wants to support a conducive environment for growth and collaboration. As part of that, ChainSafe offers services such as Smart Contract Auditing, Engineering Services, and R&D Consultancy, which people can utilize for their project. ChainSafe also offers OSS dev tooling that could be used and we encourage you to engage with us in our Community. Let's work together to stay [Chain]safe.

Willem Olding

One way to think of a blockchain is as a system for verifying provable statements to alter a global state. Examples of provable statements include:

  • Proving you hold a private key corresponding to a public key by signing some data. This is how transactions are authorized to mutate the ledger.
  • A proof-of-work (PoW)
  • Proving you know the pre-image of a hash
  • Proving the result of executing a known, deterministic program on known data. The blockchain executes the program itself to check the result
  • A SNARK proof of the result of executing a known program on (possibly unknown) data

To verify a proof, the blockchain performs the verification computation inside its execution and this can result in changes to global state. Other on-chain applications can treat the result as if it is truth.

Some statements have a lovely property in that they are succinct. This means it is easier to verify them than it is to prove them (PoW, SNARK). Others take the same amount of work either way (deterministic arbitrary computation).

What you might have spotted from the above is that some of these statements are provable in theory but impractical to prove on a blockchain. For example, hashing an extremely large piece of data, or checking the result of a long and complex computation.

Dispute games are multi-party protocols that, under certain economic conditions, can allow a blockchain to be convinced of a statement while only having to execute a small part of a computation when disputes arise.

One-step Fraud Proving

For statements that could be proven on-chain, there is an elegant solution that avoids expensive verification costs. In a one-step fraud proof, Ashley submits a statement on-chain (e.g. "the result of this computation is X") along with a bond. The bond should be greater than the expected gas cost of having the chain verify the proof.

The statement is considered pending by the chain for a length of time known as the challenge period. During this time anyone (Bellamy) can pay the gas and force the chain to perform the check in full. This will then determine if Ashley was indeed correct and update the state accordingly. If Ashley is proven to be wrong their bond is slashed and given to Bellamy.

Importantly, if no one challenges Ashley during the challenge period, it is safe to assume the statement is true, under the assumption that there is at least one honest observer who would have proven Ashley wrong if they could.
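To make the lifecycle concrete, here is a minimal Python sketch of a one-step fraud proof as described above. The `Claim` structure, the function names, and the 100-block challenge period are illustrative assumptions, not any particular protocol's API.

```python
from dataclasses import dataclass

CHALLENGE_PERIOD = 100  # blocks; illustrative value

@dataclass
class Claim:
    asserter: str          # Ashley's address
    statement: bytes       # e.g. "the result of this computation is X"
    bond: int              # should exceed the expected verification gas cost
    submitted_at: int      # block height of submission
    finalized: bool = False

def challenge(claim: Claim, challenger: str, verify_statement, now: int) -> str:
    """Bellamy pays the gas to force full on-chain verification."""
    if now > claim.submitted_at + CHALLENGE_PERIOD:
        return "too late: claim is already considered true"
    if verify_statement(claim.statement):
        return "claim correct: challenge fails, claim stands"
    # Ashley was wrong: slash the bond and award it to Bellamy.
    claim.bond, reward = 0, claim.bond
    return f"claim fraudulent: {challenger} receives bond of {reward}"

def finalize(claim: Claim, now: int) -> bool:
    """If unchallenged for the whole period, the statement is accepted."""
    claim.finalized = now > claim.submitted_at + CHALLENGE_PERIOD
    return claim.finalized
```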

This approach is great for ongoing protocols like rollups where it is important to keep the running cost low. It is relatively simple in its implementation and can be permissionless.

This design was used by the early versions of Optimism, such that the rollup state transition was only computed on L1 in the case of a dispute. Another clever application is in the Eth-NEAR Rainbow Bridge, where one-step fraud proofs are used to avoid performing expensive Ed25519 signature checks on Ethereum under regular operation. In recent months, some projects such as Fuel have proposed using one-step fraud proofs to avoid performing expensive SNARK verification unless there is a dispute.

The downside to one-step proofs is that they are not applicable in every case. It must be possible to fall back to executing the verification on-chain, and some types of computation are simply too large or require too much data for this to be feasible. What can be done in this case?

2-party bisection dispute games

The Arbitrum paper first popularized bisection dispute games in the blockchain space. To understand a bisection game you first need to understand emulators and the computation trace.

A program can be expressed as a sequence of instructions to be executed on a processor. This processor also has registers and/or memory to store state.

Executing the program and recording the registers+memory after each instruction is called an execution trace. This trace may be much longer than the original program, as instructions may branch or loop. For a program that terminates, this execution trace can be considered a very verbose proof of the final state. A program that knows how to execute all the possible instructions and update the state accordingly (an emulator) can verify this proof.

A program trace for any non-trivial program is absurdly huge, but it can be compressed using two clever tricks.

The first is that the full CPU+memory state need not be included at each step. Instead, a commitment to the state (e.g. a Merkle root) can be recorded.

The second applies when there are two conflicting parties in dispute about the result of executing a program. A two-party protocol can be used to reduce the trace down to the first instruction whose result the parties disagree upon. A third trusted arbitrator (usually a blockchain runtime) can then execute only a single instruction in the trace to resolve a dispute over an entire program.

This last trick of framing the dispute as being between two parties allows proving faults in programs in a number of steps that is logarithmic in the length of the trace, followed by the execution of a single instruction. This is incredibly powerful and has been developed by Arbitrum and the Optimism Bedrock prototype for proving rollup state transitions, along with Cartesi for proving arbitrary computations on-chain.
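To make the logarithmic reduction concrete, here is a minimal Python sketch of the bisection search, assuming each party can be queried for its claimed state commitment at any step (the function names are illustrative):

```python
def bisect(claim_a, claim_b, trace_len: int) -> int:
    """Find the first step where two parties' execution traces diverge.

    claim_a/claim_b: functions mapping a step index to that party's
    state commitment (e.g. a Merkle root) after that step.
    Assumes the parties agree at step 0 and disagree at step trace_len.
    Runs in O(log(trace_len)) rounds.
    """
    lo, hi = 0, trace_len          # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if claim_a(mid) == claim_b(mid):
            lo = mid               # still in agreement: dispute lies later
        else:
            hi = mid               # already diverged: dispute lies earlier
    # The arbitrator now executes only instruction `hi`, starting from the
    # agreed state at `lo`, to resolve the dispute over the entire program.
    return hi
```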

Problems

The problem with this approach as described is that once two players are engaged in a dispute they must both remain live in order to resolve it. There is also no way to ensure that both participants are honest so it must be possible for multiple disputes to be open on a single claim at once for the honest minority assumption to hold.

A malicious challenger may open many dispute games on a single assertion, and although they will lose their bonds, if they have more available funds than the defender they can eventually prevent the defender from being able to participate. I have written about this issue in another article.

The usual solution to this is to limit who can participate in dispute games. This is the approach used by Arbitrum and the Optimism Bedrock prototype. This compromise places these dispute game systems in between a trusted federation and fully permissionless dispute games.

Permissionless Bisection Dispute Games

So can bisection style dispute games be made fully permissionless? A recent paper by Nehab and Teixeira proposes a modification to the dispute games where the participants must submit a vector commitment to the entire execution trace before the game begins. Once this has been done the game becomes permissionless as anyone can submit a step along with its inclusion proof.

This is an excellent solution, however it has a major drawback. Execution traces are incredibly large, commonly several trillion instructions. Producing a Merkle tree and associated proofs for such a long vector is prohibitive in most cases. The authors' solution to this is to split the trace into stages and run the game hierarchically, so that commitments only ever need to be produced at the granularity of the stage under dispute.

More recently, Optimism has proposed another design which structures the dispute game as an n-ary tree rather than a binary tree. This allows other participants to fork off an existing dispute game when they believe its participants to be fraudulent.

A bond is added incrementally at each step of the tree, allowing permissionless participation. Once a game has concluded, the player who asserted correctly against any branch that has been proven false can collect the bond on each of those nodes in the tree.

This design gives the best of both worlds, allowing permissionless participation without needing to compute trace commitments, at the cost of increased complexity in implementation.

Conclusion

Dispute games are conceptually simple but designing them to be permissionless is much more challenging. Despite dispute games being proposed for use in blockchain systems more than 5 years ago, and claiming to be permissionless, there has not been a single permissionless dispute game used in production.

Optimism has made excellent progress in the last year designing dispute games that can be safe and permissionless, and these will hopefully be deployed in production in the near future.

Jagrut Kosti

Given a blockchain protocol, where the transactions are included in a block by a miner or validators, there are a few incentive compatibility requirements that do not get much attention. These are:

  • Dominant Strategy Incentive Compatible (DSIC)
  • Incentive Compatibility for Myopic Miner (MMIC)
  • Off-Chain Agreements (OCA)

For a sustainable decentralized ecosystem, it is important to align the incentives so that the system isn't biased towards a particular actor at scale. While this article discusses base layer incentives, the requirements can be extended to the application layer as well. In most protocols, there are primarily 2 different actors: the ones that maintain the protocol and the ones that use it. They can be further divided based on the architecture, but the collective flow of revenue is between these actors. Not much research effort has gone into formalizing the requirements for a sustainable incentive structure.

We first formalize the transaction fee mechanism (TFM), then define the incentive compatibility requirements, and conclude by stating what a good TFM should consist of. The description and the conclusion are based on the 2 papers mentioned in the reference section.

Transaction Fee Mechanism

A transaction $t$ has a publicly visible and immutable size $s_t$. The creator of the transaction is responsible for specifying a bid $b_t$ per unit size, publicly indicating a willingness to pay up to $s_t \cdot b_t$. A block is an ordered sequence of transactions, and each block has a cap on the number of included transactions called the maximum block size $C$ (capacity). A mempool is a list of outstanding transactions waiting to be included in a block. $M$ is the miner's current mempool.

Valuation $v_t$ is the maximum per-size value that inclusion of the transaction in a block has for its creator ($v_t$ is private to the creator), whereas $b_t$ is what they actually offer to pay. The valuation can also be thought of as the maximum price for the utility of a transaction being included in the block.

$H = B_1, B_2, \ldots, B_{k-1}$ (history) is the sequence of blocks in the current longest chain, $B_1$ being the genesis block and $B_{k-1}$ the most recent block.

Allocation Rule

An allocation rule is a vector-valued function $x$ from the on-chain history $H$ and mempool $M$ to a 0-1 value $x_t(H, M)$ for each pending transaction $t \in M$.

A value of 1 for $x_t(H, M)$ indicates transaction $t$'s inclusion in the current block $B_k$; a value of 0 indicates its exclusion.

$B_k = x(H, M)$ is the set of transactions for which $x_t(H, M) = 1$.

An allocation rule is feasible if the set of included transactions fits within capacity:

$\sum_{t \in M} s_t \cdot x_t(H, M) \le C$

While an allocation rule is defined with some criteria in mind, it is important to note that the miner (block proposer) has the ultimate say over the block it creates.

Payment Rule

A payment rule is a function $p$ determining the payment from the creator of each included transaction $t \in B_k$ to the miner.

$p_t(H, B_k)$ is a non-negative number per unit size.

Burning Rule

A burning rule is a function $q$ indicating the amount of money burned by the creator of transaction $t \in B_k$.

$q_t(H, B_k)$ is a non-negative number per unit size.

The burning rule can also be thought of as a refund to the holders of the currency via deflation. An alternative is to redirect a block's revenue to entities other than the block's miner, e.g. a foundation, future miners, etc.

A TFM is a triple $(x, p, q)$ where $x$ is a feasible allocation rule, $p$ is a payment rule, and $q$ is a burning rule.

Incentive Compatibility

Dominant Strategy Incentive Compatible

User's Utility Function

For a TFM $(x, p, q)$, on-chain history $H$, and mempool $M$, the utility of the originator of a new transaction $t \notin M$ with valuation $v_t$ and bid $b_t$ is:

$u_t(b_t) := (v_t - p_t(H, B_k) - q_t(H, B_k)) \cdot s_t$

if $t$ is included in $B_k = x(H, M(b_t))$, and 0 otherwise.

We assume that a transaction creator bids to maximize the utility function above. A TFM is then DSIC if, assuming that the miner carries out the intended allocation rule, every user (no matter what their value) has a dominant strategy — a bid that always maximizes the user’s utility, no matter what the bids of the other users.

E.g. first price auctions (FPA) are not DSIC. An FPA allocation rule (historically Bitcoin and Ethereum) is to include a feasible subset of outstanding transactions that maximizes the sum of the size-weighted bids. A winner pays their bid if the transaction is included; nothing is burned and all revenue goes to the miner.
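As an illustration, here is a simplified FPA in Python. Maximizing the sum of size-weighted bids exactly is a knapsack problem; this sketch uses the common greedy approximation of sorting by bid per unit size:

```python
def fpa_block(mempool, capacity):
    """First-price auction sketch: greedily fill the block by bid per unit size.

    mempool: list of (tx_id, size s_t, bid b_t per unit size).
    Returns the included transactions and what each creator pays;
    nothing is burned and all revenue goes to the miner.
    """
    block, payments, used = [], {}, 0
    for tx_id, size, bid in sorted(mempool, key=lambda t: t[2], reverse=True):
        if used + size <= capacity:        # feasibility: total size <= C
            block.append(tx_id)
            payments[tx_id] = bid * size   # winner pays their own bid
            used += size
    return block, payments

block, payments = fpa_block([("a", 2, 5), ("b", 3, 4), ("c", 4, 3)], capacity=6)
# block == ["a", "b"]; payments == {"a": 10, "b": 12}; "c" does not fit
```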

Incentive Compatibility for Myopic Miner

Myopic Miner Utility Function

For a TFM $(x, p, q)$, on-chain history $H$, mempool $M$, fake transactions $F$, and choice $B_k \subseteq M \cup F$ of included transactions (real and fake), the utility of a myopic miner (a miner concerned only with the current block) is:

$u(F, B_k) := \sum_{t \in B_k \cap M} p_t(H, B_k) \cdot s_t - \sum_{t \in B_k \cap F} q_t(H, B_k) \cdot s_t - \mu \sum_{t \in B_k} s_t$

where $\mu$ is the marginal cost per unit size for a miner. It is the same for all miners and is common knowledge.

  • The first term is the miner's revenue
  • The second term is the fee burn for the miner's fake transactions. It does not include real transactions, as those burns are paid by the transactions' creators.
  • The third term is the total marginal cost

A TFM is then MMIC if a myopic miner maximizes its utility by creating no fake transactions, i.e. setting $F = \emptyset$.

E.g. FPAs are MMIC, as the miner's best choice is to include all max-bid transactions, and including fake transactions does not increase its utility. Second-price type auctions, on the other hand, are not MMIC, since a miner can include fake transactions to boost the max bid.
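To see the failure concretely, here is a stylized Python example assuming unit-size transactions and a simplified second-price rule where every included transaction pays the highest excluded bid (actual second-price proposals differ in details):

```python
def clearing_price(included_bids, excluded_bids):
    """Stylized second-price rule: winners pay the highest excluded bid."""
    return max(excluded_bids, default=0)

included = [10, 9, 8]   # real bids (per unit size) that fill the block
excluded = [2]          # real bids left out

honest_revenue = clearing_price(included, excluded) * len(included)   # 3 * 2 = 6

# The miner injects a fake *excluded* bid just below the lowest winner,
# raising everyone's payment at zero cost when nothing is burned:
rigged_revenue = clearing_price(included, excluded + [7]) * len(included)  # 3 * 7 = 21
```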

Off-chain Agreements

In an OCA, each creator of a transaction $t$ agrees to submit $t$ with on-chain bid $b_t$ while transferring $\tau \cdot s_t$ to the miner off-chain; the miner in turn agrees to mine a block comprising the agreed-upon transactions of $T$.

A TFM is OCA-proof if, for every on-chain history $H$, there exists an individually rational bidding strategy $\sigma_H$ such that, for every possible set $T$ of outstanding transactions and valuations $v$, there is no OCA under which the utility of every transaction's creator and the miner's utility are strictly higher than in the outcome $B_k = x(H, M(\sigma_H(v)))$ with on-chain bids $\sigma_H(v)$ and no off-chain transfers.

E.g. FPAs are OCA-proof, whereas TFMs that burn any amount of revenue are not.

Conclusion

This article gives a brief overview of the formalization of TFM design. For a sustainable protocol, it is important that the TFM is DSIC, MMIC, and OCA-proof. Based on this, the obvious question is: is there any TFM that satisfies all 3 properties?

[1] conjectures that no non-trivial TFM can satisfy all 3 properties, and [2] proves this conjecture. For designing a TFM, it would be interesting to characterize mechanisms that satisfy various subsets and relaxations of these 3 properties. The goal should be to minimize the effects of a property that is not satisfied or only partially satisfied.

I highly recommend interested readers to read [1] & [2].

References

  1. Roughgarden, Tim. "Transaction fee mechanism design." ACM SIGecom Exchanges 19.1 (2021): 52-55. (Paper)
  2. Chung, Hao, and Elaine Shi. "Foundations of transaction fee mechanism design." Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics, 2023. (Paper)

Daniel Choi

In an evolving blockchain ecosystem where the number of bad actors and exploiters grows, one could experience some fear or paranoia around interacting with strangers on the terrifying internet.

In this short article, we will explore a few ways to equip yourself with the necessary tools to fend off potential attacks and keep your keys/system secure. This handbook will encompass a few protective measures, including key storage, wallet security, ephemeral execution environments, and making smart decisions.

Our exploration will dive into the Ethereum blockchain system and associated tools and technologies.

Key Storage

One's key(s) are an essential component of interacting with the blockchain and an incredibly valuable piece of data that should be well guarded. Safeguarding these keys should be prioritized, and they should always be used with caution.

A simple way to start is choosing how one's private keys are generated. Opt for offline generation to ensure that the process remains concealed from prying eyes and potential data breaches. An offline private key generator, such as this one, guarantees the confidentiality of your private key. Additionally, consider exploring the SLIP39 protocol — a mnemonic (private key) generator leveraging Shamir Secret Sharing to divide the seed phrase into multiple shares.

These private keys, or mnemonics (a list of 12 or 24 words), must be safeguarded and should never be lost or stolen. You may wish to physically write down your private key/mnemonic on a piece of paper, which is called a "paper wallet". It is imperative to never lose this piece of paper and to take precautionary measures, such as buying a safe to store it in. The best way to protect your keys is to generate them offline on a dedicated computer/device. This computer/device will never connect to the internet and will be "air gapped", which will further shield your keys.

Alternatively, entrusting a reputable wallet application with your key management is another option. Beware that certain wallets are "custodial," implying that they possess ownership of your keys while merely providing access to them via their platform. Opting for a "non-custodial" wallet provides greater autonomy, granting you full control over your assets through the ability to export your private key. As the saying goes, "not your private key, not your coins".

Among various "non-custodial" wallets, ZenGo, Torus, Argent, and AlphaWallet stand out. Particularly, the most technologically inclined towards protection is ZenGo. This is due to its MPC (multi-party computation) as well as its "social recovery" feature.

There are also different types of wallets, such as hot wallets and cold wallets. Any keys used to continually connect to the internet and make frequent transactions would form a "hot wallet". "Cold wallets" refer to stored keys that are not connected to the internet and are rarely used, such as Ledger and Trezor. For the highest security, both should be used. The cold wallet should hold longer-term assets that do not need to be transferred frequently, and the hot wallet should hold minimal funds (perhaps up to the amount one would be willing to lose).

Of note, there is actually a vulnerability in the Trezor T. Additional information can be found here. Due to this, it is recommended to use another model offered by Trezor, or to take extra precautions in physically securing the device.

Wallet Security

So now that you have a wallet, does that mean it's secure forever? Most likely not... However, there are a few ways to make sure your wallets are protected. Let us explore some important strategies for proper and ongoing maintenance.

  1. Asset Monitoring: Employ monitoring tools such as Tenderly and OZ Defender to track wallet addresses and associated assets. These will help in checking wallet address(es), and you may optionally configure notifications for any activity on those address(es). Alternatively, a blockchain explorer can be used to view your address.
  2. Approval Management: When interacting with tokens, there is a process by which you grant "approval" for another address to transfer the corresponding token. This is a necessary step when minting/burning/transferring tokens. However, there are plenty of times a contract will request "indefinite approval" for any amount of tokens. These are especially dangerous if the contract has malicious intent. Utilize tools such as revoke, unrekt, or zapper to manage "approvals" for all of your addresses.
  3. Key Rotations: Regularly rotate your keys to enhance security. While there is no particularly fixed schedule for key rotations, the more frequently you do it, the more likely your keys will be secure - granted they are generated in a secure manner. Key rotation involves generating a new key and then transferring your previous assets to the new key. Though execution costs may be a deterring factor in the frequency of key rotations, it is still an important practice to exercise.
  4. Adding Canary Tokens/Accounts: For the ultra-cautious, there is a great tool to provide insight into potential breaches. Canary Tokens generates a dummy file which can be given an obvious name, like "privatekeys.txt". The file will then keep track of who accessed it, when it was accessed, and from where. This is a great way to ascertain whether your computer has been compromised and if there is anyone lurking that could potentially cause you harm. To get started, you can follow this quick guide.

Ephemeral Execution Environment

A great way to ensure that all web3 interactions are safe is to set up a brand new environment. This accounts for any potential malware or undetected bad actors that could be present in your current system.

One way this could be done is by creating a VM (virtual machine) - ideally a dedicated machine. This will be used solely for any blockchain activity. This will guarantee a clean working environment and can foster some peace of mind. Another benefit to this type of setup is that if anything anomalous occurs, you can always tear down the VM and build a new one again!

A very useful article for getting quickly set up is available here. Bear in mind that the article does not go through security in detail (e.g. networking security), which should be kept in mind when following it.

Making Smart Decisions

People are prone to making mistakes, but rarely do they make a royal one. When it comes to protecting your assets, however large or small, there is nothing more important. Minimizing errors is crucial, as actions on the blockchain are immutable.

A very common problem for blockchain users is phishing scams. Some are relatively easy to catch, and here are a couple of tips on how to avoid them. Firstly, never click any links (especially if they're phish-y) in emails or strange websites; supplemental information can be found here. Secondly, do not send tokens to any address(es) that are unknown or unverified. Along these lines, be aware of who you are interacting with online (whether by username, email, etc.). If an email or DM was sent by a funny-looking email/username, one could deduce that it is most likely a scam.

Another tip is to go deeper into the code. If able, one should try to examine the protocol's codebase and make sure that it is public and verified. This will help with making informed decisions when interacting with a smart contract. Although this may be a far more difficult security practice, there are other simpler ways such as ensuring a protocol has received an audit.

Conclusion

While this article offers a succinct overview, I hope it provides some useful tips for becoming more security-centric. As a now-informed reader, delve into further research and tread cautiously in the dark forest.

If this article was helpful, please let us know and make a request! There are a number of things that could be added or further expounded on, possibly in another article. Thanks for the read and stay (Chain)Safe.

Jagrut Kosti

Ever had to go out with your colleagues for lunch and experienced how hard it can be to come to a conclusion on where to eat? Individual decision making is hard. Collective decision making is much harder (an understatement)! But it is extremely important, especially in decentralized autonomous organizations (DAOs). Being able to come to a conclusion using code as a mediator is challenging. But before we dive into the complexities of governance, whether on-chain or off-chain, let's understand some of the fundamentals from social choice theory and the decades of work that have been put into understanding decision making in a democratic structure.

Most DAOs that have implemented a governance mechanism essentially boil down their voting method to a single-choice "yes", "no", or "abstain" option. While this seems intuitively simple and completely eliminates the complex intricacies involved in ranked choice voting, it's not the best system for all scenarios. And most projects do not seem to highlight this:

Let's say that for platform X (sarcasm intended), a decision has to be made about which logo, out of logoA, logoB, and logoC, should be adopted. One can argue that instead of creating one proposal and tallying using ranked choice preference, we can split the proposal into 3 and carry on with the usual voting methods adopted by DAOs:

  • Should we go with logoA? Options: "yes", "no", "abstain"
  • Should we go with logoB? Options: "yes", "no", "abstain"
  • Should we go with logoC? Options: "yes", "no", "abstain"

This opens up a can of worms! What happens if logoA and logoB both have "yes" as the winner? Should we then create another proposal to resolve the tie? Are the same voters going to vote on that? What if some significant portion of the old voters do not show up? What if new voters show up? Would this not increase voter fatigue?....

While most DAOs try to avoid such proposals, they can still happen depending on the topic of discussion. There is a reason why ranked choice preference tallying is avoided, but that does not mean it cannot be made practical. In this article, we will look into a few of the well-known theorems in social choice theory to keep in mind when designing any governance mechanism.

May's Theorem

The reason that most DAOs use a single choice simple majority voting method is May's theorem:

For two-alternative voting, May's theorem states that simple majority is the only anonymous, neutral, and positively responsive social choice function.

Anonymous: There is no distinction between the votes and each voter is identical.

Neutral: Reversing the choice of each voter reverses the group outcome.

Positively Responsive: If some voters change their preference in favor of one option while others remain the same, the proposal's outcome does not change in the opposite direction. If the previous outcome was a tie, the tie is broken in the direction of the change.

The theorem only applies if there are two options. In most DAOs' ballots (the set of options in a proposal), the "abstain" vote is not counted, and hence the theorem applies.

Arrow's Impossibility Theorem

Arrow's impossibility theorem applies only to ranked choice voting. A voting rule is a method of choosing a winner from a set of options (the ballot) on the basis of voters' rankings of those options. Before jumping into the theorem, let's examine a couple of different voting rules.

In plurality rule, the winning option is the one ranked first by more voters than any other option. E.g. for 3 options $X$, $Y$ & $Z$, if 40% of voters liked $X$ best (i.e. ranked it first), 35% liked $Y$ best, and 25% liked $Z$ best, then $X$ wins, even though it is short of an overall majority (greater than 50%).

| 40% | 35% | 25% |
| --- | --- | --- |
| X | Y | Z |

In majority rule, the winning option is the one that is preferred by a majority to each other option. E.g. for 3 options $X$, $Y$ & $Z$, if 40% of voters rank $X > Y > Z$, 35% rank $Y > Z > X$, and 25% rank $Z > Y > X$, the winner is $Y$, because a majority of voters (35% + 25% = 60%) prefer $Y$ to $X$ and a majority (40% + 35% = 70%) prefer $Y$ to $Z$.

| 40% | 35% | 25% |
| --- | --- | --- |
| X | Y | Z |
| Y | Z | Y |
| Z | X | X |

Note that plurality and majority rule lead to different outcomes. This prompts the question: which outcome is "right"? Or, which one is better to use? We can then ask a more general question: among all possible voting rules, which is the best?

Arrow proposed that we should first identify what we want out of a voting rule, i.e. which properties should be satisfied. The best voting rule will then be the one that satisfies all of them. Those properties are:

1. Decisive/Unrestricted domain

All preferences of all voters should be accounted for, and there should always be exactly one winner.

2. Pareto Principle

If all voters rank $X$ above $Y$ and $X$ is on the ballot, $Y$ should not be the outcome.

3. Non-dictatorship

No single voter's choice should decide the outcome.

4. Independence of irrelevant alternatives

Given a voting rule and voters' rankings, if $X$ is the winner, then if we remove $Y$ from the ballot because it was not the winning choice (irrelevant), $X$ should still win, i.e. the outcome is independent of an irrelevant choice. To give an example: in the plurality rule example above, if option $Z$ were removed and all those who chose $Z$ as their first choice now chose $Y$, $Y$ would become the winning choice with 60%. Politically speaking, $Z$ is a spoiler. Even though $Z$ was not going to win in either case, it ended up determining the outcome. This happens in democratic elections several times. This property serves to rule out spoilers.

Plurality rule is vulnerable to spoilers and hence violates the independence property. Majority rule satisfies the independence property, i.e. if $X$ beats each of the other choices, it continues to do so if one of those choices is dropped. But majority rule does not satisfy the decisiveness property, i.e. it doesn't always produce a winner. E.g. in the following table of ranked choices, $Y$ beats $Z$ by a majority (68% to 32%), $X$ beats $Y$ by a majority (67% to 33%), and $Z$ beats $X$ by a majority (65% to 35%), so there is no option which beats the other two. This is called Condorcet's Paradox.

| 35% | 33% | 32% |
| --- | --- | --- |
| X | Y | Z |
| Y | Z | X |
| Z | X | Y |
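A quick Python sketch verifies the paradox by checking every pairwise contest (the profile encoding is illustrative):

```python
# Ranking profile from the table above: ranking (best first) -> voter share.
profile = {("X", "Y", "Z"): 0.35, ("Y", "Z", "X"): 0.33, ("Z", "X", "Y"): 0.32}

def support(a, b):
    """Share of voters ranking option a above option b."""
    return sum(w for rank, w in profile.items() if rank.index(a) < rank.index(b))

for a, b in [("Y", "Z"), ("X", "Y"), ("Z", "X")]:
    print(f"{a} beats {b} with {support(a, b):.0%}")
# Y beats Z with 68%, X beats Y with 67%, Z beats X with 65%:
# a cycle, so no option beats both others and majority rule has no winner.
```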

Arrow tried to find a voting rule that satisfies all the properties, but was eventually led to the conclusion that there is no voting rule that satisfies all 4 properties!

The name of the theorem is itself a source of pessimism: if something is "impossible", it's pretty hard to accomplish. The theorem prompts the question: given that no voting rule satisfies all properties, which rule satisfies them most often? One plausible answer is that in majority voting, if one particular class of rankings (e.g. $Z > Y > X$) that is highly unlikely to occur is removed, then majority rule will always have an outcome. In this case, majority rule does not violate the decisiveness property.

There is another theorem called the domination theorem. It states that for any voting rule that differs from majority rule, if it works well for a particular class of rankings, then majority rule must also work well for that class; furthermore, there must be some other class of rankings for which majority rule works well and that voting rule does not. In short, whenever another voting rule works well, majority rule must work well too, and there will be cases where majority rule works well and the other voting rule does not.

This applies only if it is possible to identify a class of rankings that is highly unlikely to occur. In the case of DAOs, the question arises of who is responsible for identifying and eliminating such a class of rankings for each proposal. Simply eliminating the least-voted class of rankings results in utter neglect of the minority.

Gibbard–Satterthwaite Theorem

The Gibbard–Satterthwaite theorem applies to ranked choice voting that chooses a single winner. It follows from Arrow's impossibility theorem. For every voting rule, one of 3 things must hold:

  1. There is a dictatorship, i.e. one voter's choice determines the outcome, OR
  2. There are only two choices (on the ballot) and hence only two possible outcomes, OR
  3. Tactical voting is possible, i.e. there exists some strategy where a voter's ranked choice does not reflect their sincere opinion but gives the outcome they want.

Borda count: for the ranked choice ballot of each voter with $n$ options, assign $n-1$ points to the top option, $n-2$ to the second option, ..., and $0$ to the last option. The option with the most points is the winner.

To demonstrate the theorem, consider 3 voters (Alice, Bob & Carol) and 4 options $W$, $X$, $Y$ & $Z$, with ranked preferences as follows:

| Voter | Choice 1 | Choice 2 | Choice 3 | Choice 4 |
| --- | --- | --- | --- | --- |
| Alice | W | X | Y | Z |
| Bob | Y | X | Z | W |
| Carol | Y | X | Z | W |

Based on the Borda count ($W$: 3, $X$: 6, $Y$: 7, $Z$: 2), $Y$ is the winner. But suppose Alice changes her ballot as follows:

| Voter | Choice 1 | Choice 2 | Choice 3 | Choice 4 |
| --- | --- | --- | --- | --- |
| Alice | X | W | Z | Y |
| Bob | Y | X | Z | W |
| Carol | Y | X | Z | W |

From the Borda count ($W$: 2, $X$: 7, $Y$: 6, $Z$: 3), $X$ is now the winner, and Alice's preference of $X$ over $Y$ is still maintained. We can say that there exists a strategy by which the Borda count is manipulable.
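The manipulation can be checked with a few lines of Python implementing the Borda count as defined above:

```python
def borda(ballots, options):
    """Borda count: top of an n-option ballot gets n-1 points, last gets 0."""
    n = len(options)
    scores = {o: 0 for o in options}
    for ballot in ballots:
        for position, option in enumerate(ballot):
            scores[option] += (n - 1) - position
    return scores

options = ["W", "X", "Y", "Z"]
honest = [["W", "X", "Y", "Z"],   # Alice's sincere ranking
          ["Y", "X", "Z", "W"],   # Bob
          ["Y", "X", "Z", "W"]]   # Carol
tactical = [["X", "W", "Z", "Y"]] + honest[1:]  # Alice votes insincerely

print(borda(honest, options))    # {'W': 3, 'X': 6, 'Y': 7, 'Z': 2} -> Y wins
print(borda(tactical, options))  # {'W': 2, 'X': 7, 'Y': 6, 'Z': 3} -> X wins
```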

Conclusion

Before designing any governance mechanism for DAOs, it is extremely important to understand that on-chain voting is a great tool, but it does not solve the inherent problems in social choice theory. We may want to experiment with new systems, but we should first base our work on decades of research that has already established what does and does not work. This article gives an overview of why ranked choice voting is complex to implement, and why, even though most DAOs currently opt for single choice voting, that too is prone to manipulation.

Most DAOs allow voters to weigh their preference by either using more tokens or time-locking tokens (conviction voting). This is limited to putting the weight towards one option in single choice voting. Ranked choice voting is complex to begin with, and introducing weights can potentially add more complexity and result in unforeseen outcomes. As shown via the Gibbard–Satterthwaite theorem, the Borda count is manipulable, and adding weights will open up more possibilities to game the system. But nonetheless, a great domain to research and experiment in!

Timofey Yaluhin

Elliptic curves form a basic building block for all kinds of complex cryptographic machinery: digital signature schemes, key exchange mechanisms, zero knowledge proofs, multi-party computation, etc.

While all curves have a similar mathematical structure, they can have wildly different properties in terms of security, performance, and supported arithmetic. With the increasing adoption of zero-knowledge cryptography, finding and exploiting such differences becomes increasingly prominent. Thus, the search for new elliptic curves is an active area of research in cryptography. The goal is to find curves that offer higher security, more efficient arithmetic operations, and a wider scope of cryptographic applications.

This post goes over different methods for constructing elliptic curves, some potential applications, and some practical considerations for application-specific curve development. The reader is assumed to have a basic understanding of elliptic curve cryptography. If not, consider checking the elliptic curves cheat-sheet first.

Why search for new curves?

As with many things in modern cryptography, it is the rise of blockchains that sparked so many innovations related to elliptic curves as well as zero-knowledge cryptography, which subsequently pushed even more researchers to actively search for new elliptic curves. The applications described here would therefore be closely related to Web3 and ZK domains.

Platform-constrained arithmetic circuits

In ZK systems, computation is expressed as arithmetic circuits that in turn perform their operations on a finite field $\mathbb{F}_r$. This field corresponds to the scalar field of an elliptic curve whose base field is then used by the verifier in the proof verification algorithm. A verifier can be a person to whom you need to prove some statement, but more commonly the verifier is a smart contract. Blockchains that host these contracts usually support a fairly limited number of elliptic curves. Ethereum, for example, only has a precompiled contract for BN256. This can become a significant limitation when an in-circuit computation itself contains elliptic curve operations. Think of signature verification, checking that an encrypted message has some properties, or even verifying another SNARK, aka recursive proof composition, which we discuss later.

An example to note relates to the StarkNet platform and the Cairo language. The core technology powering these two is STARKs (no surprise). What's interesting is that, unlike other proof systems, STARKs only require a small prime field to operate, so researchers at StarkWare invented a special STARK curve that has a very small prime field - 64 bytes. As a result, implementing cryptographic schemes over the standard curves (e.g. ECDSA over Secp256k1) would be wasteful. Unsatisfied with the status quo, Mikerah from HashCloak resolved to find a new Cairo-friendly curve named Starkjub.

The solution to this kind of problem is quite simple; in fact, it's the oldest trick in the book. One can pick another curve $E'$ whose base field matches the scalar field of the curve $E$ used by the verifier. This offers a great alternative to simulating foreign fields with the available (unfriendly) one, referred to as non-native arithmetic. Commonly, curves found with such intent have a twisted Edwards form. They are defined by a particular form of equation, $ax^2 + y^2 = 1 + dx^2y^2$, and are known for their efficient point addition. Many such curves were found in recent years. Some of them are given cute names like JubJub (Ed-on-BLS12-381) and funny pictures to be more memorable.

JubJub (left) and Bandersnatch (right)

Application-constrained arithmetic circuits

For some applications the aforementioned curve substitution is impossible. Think of a cross-chain bridge where a smart contract on the destination chain needs to verify signatures of the source blockchain. Another example is identity-based encryption (IBE), like the one used in the tlock scheme to achieve practical time-lock encryption facilitated by the Drand threshold network. Recently I set out to make such encryption verifiable and quickly realized that performing arithmetic on BLS12-381, which Drand operates on, is very inefficient with existing tools. The search for a better alternative brought me down this rabbit hole.

The discovered solution is the opposite of the one we've just discussed. Here a curve must be picked whose scalar field matches the base field of the curve used by the application. Depending on the proving system, there can be another important requirement: systems such as Groth16 and PLONK that rely on pairing-based commitment schemes (e.g. KZG) require pairing-friendly curves to operate. The reason is that the KZG proof verification algorithm requires multiplying two elliptic curve points, which is not possible in traditional ECC. However, when the two points come from a pairing-friendly curve's groups $\mathbb{G}_1$ and $\mathbb{G}_2$, and there exists a bilinear map between their respective groups of points, then it's possible to map these points into a point of a target group $\mathbb{G}_T$, which acts as a product $\mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_T$.
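As a toy illustration of bilinearity, here is a SageMath sketch using the Weil pairing on a small supersingular curve; the parameters are purely illustrative and far too small to be secure:

```python
# SageMath: y^2 = x^3 + x over GF(59) is supersingular with embedding
# degree 2, so the full 5-torsion lives in E(GF(59^2)).
p, r = 59, 5                       # r divides #E(GF(p)) = p + 1 = 60
F.<i> = GF(p^2)
E = EllipticCurve(F, [1, 0])
h = E.order() // r^2               # cofactor: #E(GF(p^2)) = (p + 1)^2

while True:
    P, Q = h * E.random_point(), h * E.random_point()
    if P.order() == r and Q.order() == r and P.weil_pairing(Q, r) != 1:
        break                      # P, Q generate independent order-r subgroups

e = P.weil_pairing(Q, r)
assert (2 * P).weil_pairing(3 * Q, r) == e^6   # e(aP, bQ) = e(P, Q)^(ab)
```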

The known methods for finding pairing-friendly curves whose scalar field embeds another curve's base field are Cocks–Pinch (CP) and Dupont–Enge–Morain (DEM), and we will take a closer look at them later in the article. In the previously mentioned time-lock IBE project, the Cocks–Pinch method was used to find a curve that embeds the BLS12-381 scalar field, which I named the YT6-776 curve, aka "Yeti".

info

When two curves $E_1$ and $E_2$ satisfy $r_1 = \#E_2(\mathbb{F}_{q_2})$, where $r$ is the size of the largest prime subgroup (scalar field), then they are referred to as a 2-chain. If we keep adding embedding curves satisfying the given condition, the resulting set of curves $E_1, E_2, \ldots, E_n$ is an $n$-chain.

Recursion substrate for ZKPs

Recursive ZKPs is a short way of saying that one proof attests to the validity of another, i.e. a "proof of a proof". For this, an arithmetic circuit of the outer proof must implement the verification algorithm of the inner proof. If both outer and inner proofs are based on the same proving system, we say it's a recursive proof composition, otherwise just a proof composition. When a proof verifies a proof just once or a bounded number of times, it's one-layer or $N$-layer recursion respectively.

The reason why recursion is challenging relates closely to the aforementioned problems. Instead of repeating myself, I will quote an explanation from the "Zexe: Enabling Decentralized Private Computation" paper, which I recommend for those who'd like to go beyond the concepts mentioned in this article.

The way to approach recursive proof composition varies depending on the desired type. For bounded recursion, an $n$-chain of curves is sufficient to compose proofs up to $n$ levels; otherwise a cycle of curves must be used. A pair of curves satisfying $r_1 = \#E_2(\mathbb{F}_{q_2}) \land r_2 = \#E_1(\mathbb{F}_{q_1})$ forms a 2-cycle. During proof generation, one has to alternate the instantiation of the proofs between the two curves of the cycle so that their fields "match up". Only prime-order curves can form cycles, and it's generally much harder to find cycles of pairing-friendly curves. Ideally, both curves in the cycle should have the same embedding degree $k$ and the same 2-adicity, like the Pasta curves and unlike the Tweedle curves.

The only known pairing-friendly cycles are formed by alternating MNT (Miyaji–Nakabayashi–Takano) curves of embedding degrees 4 and 6 using the [KT07] method. Note that due to the low embedding degrees, secure curves in the MNT family must be constructed over very large (1024-bit) fields, which significantly downgrades performance.

Methods for constructing elliptic curves

Complex multiplication method

The complex multiplication (CM) method is used to generate an equation for a curve with some given properties such as order $n$, embedding degree $k$, trace $t$, and fundamental discriminant $D$.

To construct an elliptic curve over $\mathbb{F}_q$ with $n$ points:

  • Start by choosing a prime power $q$ and integers $n, D, k, t$.
  • Find an integer solution $(x, y)$ to the CM equation of the form $Dy^2 = 4q - t^2 = 4hr - (t - 2)^2$.

To construct a family of curves over $\mathbb{F}_{q(x)}$:

  • Parametrise $t, r, q$ as polynomials: $t(x), r(x), q(x)$.
  • Find all solutions $(x, y)$ to the CM equation in polynomial form: $Dy^2 = 4q(x) - t(x)^2 = 4h(x)r(x) - (t(x) - 2)^2$.

The output is coefficients $A$ and $B$ for the elliptic curve equation in Weierstrass form ($y^2 = x^3 + Ax + B$).

Cocks-Pinch method

The Cocks–Pinch (CP) method is used to construct pairing-friendly elliptic curves with an arbitrarily chosen embedding degree; curves constructed using this method have a $\rho$-value of approximately 2.

To use the CP method:

  • Choose a positive integer $k$, an integer $r$ congruent to 1 modulo $k$, and a fundamental discriminant $D$.
  • Find a trace $t$ and prime $q$ such that the CM equation is satisfied.

The output is a prime integer $q$ such that there exists an elliptic curve $E$ over $\mathbb{F}_q$ with an order-$r$ subgroup and embedding degree $k$. If the fundamental discriminant $D \le 10^{12}$, then $E$ can be constructed using the CM method.

| Advantages | Disadvantages |
| --- | --- |
| order $r$ can be chosen in advance | cannot construct curves of prime order |
| allows arbitrary embedding degree $k$ | $\rho \approx 2$ (security cost is about twice the base field size) |
| many curves possible; easy to specify bit sizes | |

When to use the CP method:

  • For embedding a known curve's base field into a new curve's scalar field.
  • When minimising $\rho$ is not a priority.
tip

The addition in the double-and-add iteration (Miller's loop) of the pairing operation executes more quickly when $r$ has a low Hamming weight (number of non-zero bits). Since the CP method allows $r$ to be chosen arbitrarily, this optimization can be exploited.
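Below is a minimal SageMath sketch of the CP recipe described above, following the usual Freeman–Scott–Teske formulation. The input sizes are toy values, and the integer-lifting step is simplified; a real search retries different $r$ until a prime $q$ appears.

```python
# SageMath sketch of the Cocks-Pinch method (simplified).
def cocks_pinch(k, D, r):
    Fr = GF(r)
    assert (r - 1) % k == 0 and Fr(-D).is_square()
    g = Fr.multiplicative_generator() ^ ((r - 1) // k)  # primitive k-th root of 1
    t = g + 1                                           # trace: t - 1 has order k
    y = (t - 2) / Fr(-D).sqrt()
    # Lift to integers; try both representatives of y until q is prime.
    for t0, y0 in [(ZZ(t), ZZ(y)), (ZZ(t), r - ZZ(y))]:
        q4 = t0^2 + D * y0^2                            # CM: 4q = t^2 + D*y^2
        if q4 % 4 == 0 and is_prime(q4 // 4):
            return q4 // 4, t0          # curve over GF(q) follows via CM method
    return None                         # in practice: retry with another r

# Example: k = 6, D = 3, and a ~40-bit prime r with r = 1 mod 6 (toy sizes).
r = next_prime(2^40)
while (r - 1) % 6 != 0 or not GF(r)(-3).is_square():
    r = next_prime(r)
print(cocks_pinch(6, 3, r))
```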

Dupont-Enge-Morain method

The Dupont–Enge–Morain (DEM) method is similar to CP in that it produces elliptic curves with an arbitrary embedding degree, but in doing so it computes the trace $t$ and subgroup order $r$ simultaneously using resultants.

To use the DEM method:

  • Choose an embedding degree $k$ and fundamental discriminant $D$.
  • Find $t$ and $r$ simultaneously using resultants, then find a cofactor $h$ such that the CM equation is satisfied.

The output is prime integers $q$ and $r$ such that there exists an elliptic curve $E$ over $\mathbb{F}_q$ with an order-$r$ subgroup and embedding degree $k$. If $a = Dy^2$ with $D \le 10^{12}$, then $E$ can be constructed using the CM method.

| Advantages | Disadvantages |
| --- | --- |
| effective for computing curves with arbitrary embedding degree $k$ | more difficult to specify the order $r$ precisely, as it is found as a value of a certain polynomial |

When to use the DEM method:

  • When $r$ is already defined by a polynomial in a higher-level application; otherwise use the CP method instead.
  • When $t, r, q$ are parameterised as polynomials, $\rho < 2$ can be achieved in the resulting cyclotomic curve families ($\mathbb{F}_q$ is a cyclotomic field, $r$ is a cyclotomic polynomial).

Miyaji-Nakabayashi-Takano method

The Miyaji–Nakabayashi–Takano (MNT) method is used to sample a sparse family of elliptic curves with a limited choice of embedding degree.

To use the MNT method:

  • Choose an embedding degree $k$ and fundamental discriminant $D$.
  • Parametrise $t(x)$ and $h(x)$ (set $h(x) = 1$ if a prime-order curve is needed).
  • Compute $r(x)$ and $q(x)$ such that $r(x) = q(x) + 1 - t(x)$ and $q(x) = h(x)r(x) + t(x) - 1$.
  • Find all solutions $(x, y)$ such that the CM equation is satisfied.

The output is polynomials $q(x)$ and $r(x)$ such that there exists a family of elliptic curves $E(x)$ over $\mathbb{F}_{q(x)}$ with $h(x) \cdot r(x)$ points and embedding degree $k = 3, 4,$ or $6$. If $q(x)$ and $r(x)$ are both prime, then the curves $E(x)$ can be constructed via the CM method.

| Advantages | Disadvantages |
| --- | --- |
| good for finding prime-order curves | embedding degree is limited* to $k = 3, 4, 6$ |
| | 128-bit security requires large (1024-bit) fields |

* extension methods allow $k = 10, 12$

When to use:

  • When a prime-order curve is needed.
  • When looking for cycles of curves.

Twisted Edwards curves over the known field

There's no single method for finding curves with a given base field size $q$, but the general procedure is as follows:

  • Fix the curve by choosing the coefficient $d$.
  • Try different $a$, until you find a curve over $\mathbb{F}_q$ with a satisfiable subgroup size $r$.
  • During the search it's possible to constrain other parameters, e.g. the cofactor.

An alternative, well-generalized method is described in "Twisted Edwards Elliptic Curves for Zero-Knowledge Circuits" and was used to find Baby-JubJub over the BN256 scalar field. The authors first find a Montgomery curve with the desired parameters and then transform it into a twisted Edwards curve, which is possible because both forms are birationally equivalent. See the example code.
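Here is a minimal SageMath sketch of the search loop described above, checking candidate twisted Edwards curves via their birationally equivalent Montgomery form. All parameters are illustrative, and the Montgomery scaling factor $B$ is ignored for brevity (it only selects between a curve and its quadratic twist, which a full search would track).

```python
# SageMath sketch: fix d, vary a, keep curves whose order factors as
# (small cofactor) * (large prime). The twisted Edwards curve
# a*x^2 + y^2 = 1 + d*x^2*y^2 corresponds to Montgomery y^2 = x^3 + A*x^2 + x
# with A = 2*(a + d)/(a - d).
def find_edwards_curve(q, d, cofactor=8, tries=500):
    F = GF(q)
    for a in range(1, tries):
        if F(a) == 0 or F(a) == F(d):
            continue                           # need a != 0, a != d
        A = 2 * (F(a) + F(d)) / (F(a) - F(d))
        if A^2 == 4:
            continue                           # singular Montgomery curve
        E = EllipticCurve(F, [0, A, 0, 1, 0])  # y^2 = x^3 + A*x^2 + x
        n = E.order()
        if n % cofactor == 0 and is_prime(n // cofactor):
            return a, d, n // cofactor         # (a, d) and subgroup size r

# Toy field size; real searches use the verifier curve's ~255-bit scalar field.
print(find_edwards_curve(next_prime(2^16), d=7))
```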

More about the Twist

A twist is a method of finding a twisted curve $E'$ over $\mathbb{F}_q$ which is isomorphic (equivalent) to a known curve $E$ over $\mathbb{F}_{q^d}$, such that it's possible to use the results of cheaper arithmetic over the smaller $\mathbb{F}_q$ for computation on points of $E$, which is defined over the larger field $\mathbb{F}_{q^d}$.

The minimal integer $d$ for which $E$ and $E'$ are isomorphic over $\mathbb{F}_{q^d}$ is called the degree of the twist. There exist curves with a quadratic twist ($d = 2$), cubic twist ($d = 3$), and sextic twist ($d = 6$).

To find a (quadratic) twist:

  • Suppose you have a curve $E$ over $\mathbb{F}_p$ with equation $y^2 = x^3 + ax^2 + bx + c$.
  • Pick some $d$ that is not a square mod $p$, i.e. there is no $x$ such that $x^2 - d$ is divisible by $p$.
  • Define the twisted curve $E'$ with equation $dy^2 = x^3 + ax^2 + bx + c$.
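A minimal SageMath sketch of these steps for the short Weierstrass special case ($a = 0$), with toy illustrative values:

```python
# SageMath: construct a quadratic twist of a toy curve and sanity-check it.
p = 1009
F = GF(p)
E = EllipticCurve(F, [3, 7])                             # y^2 = x^3 + 3x + 7
d = next(e for e in F if e != 0 and not e.is_square())   # a non-residue mod p
# Substituting y -> y/sqrt(d) turns d*y^2 = x^3 + 3x + 7 into short
# Weierstrass form with coefficients scaled by d^2 and d^3:
Et = EllipticCurve(F, [3 * d^2, 7 * d^3])
assert Et.is_quadratic_twist(E)              # nonzero result confirms the twist
print(E.order(), Et.order())                 # the two orders sum to 2p + 2
```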

To find higher-degree twists it's possible to stack multiple lower-degree extensions in a "tower" structure. For example, a sextic twist can be constructed by stacking a cubic and a quadratic extension: $\mathbb{F}_q \rightarrow \mathbb{F}_{q^3} \rightarrow \mathbb{F}_{q^6}$. Some operations, such as pairings, can be computed more quickly when performed over an extension field in tower form.

Advantages:

  • Increases the security of the new curve while keeping the performance of the origin curve, e.g. a twisted curve defined over $\mathbb{F}_{q^d}$ may have 214-bit security, but group operations can be computed in $\mathbb{F}_q$ instead of $\mathbb{F}_{q^d}$.
  • Compression: in a twisted curve with embedding degree $k$ and a degree-$d$ twist, the output of a pairing can be given as an element of $\mathbb{F}_{q^{k/d}}$ instead of $\mathbb{F}_{q^k}$, sparing $\log_2 d$ bits.

Methods for curve optimization

Ristretto method

Ristretto is a technique that can be applied to Edwards curves with cofactor $h = 4$ or $8$ to quotient out the cofactor, effectively creating prime-order groups.

To use Ristretto:

  • Define a new type for Ristretto points which contains the representative Edwards point.
  • Add an equality function to compare both representations of the same point.
  • Add encoding and decoding functions to represent a point in and from their corresponding bitstrings.
  • Add a map from bitstrings to Ristretto points suitable for hash-to-point operations.

Advantages:

  • Prevents certain attacks e.g. small subgroup attacks.
  • Compressed points are more efficient to transmit and store.
  • Preserves points' specific properties, which can be important for security.
  • Abstracts away the cofactor $h$, which could otherwise be exploited by attackers.

Gallant-Lambert-Vanstone method

The Gallant–Lambert–Vanstone (GLV) method is a technique that can be applied to curves whose endomorphism $\psi$ can be efficiently computed, to significantly accelerate scalar multiplication.

To use the GLV method:

  • During curve generation, the fundamental discriminant must be restricted to $-D \geq -388$ so that $\psi$ can be efficiently computed.
  • When implementing the curve's arithmetic, scalar multiplication should be written according to the GLV algorithm (example).

Tools and Tips

To start with elliptic curve development, install SageMath, an open-source Python-based math-oriented programming environment that offers a comprehensive suite of tools essential for generating and testing elliptic curves. Luckily, there's no need to implement everything from scratch. The researchers from SCIPR Lab have already developed and packaged many of the popular methods into the ecfactory plugin. The recommended installation method didn't work for me with SageMath 9.0+, though, so I opted to add pip installation support in my fork.

Security considerations are essential to elliptic curve development. To check that a given curve satisfies current security requirements, use SafeCurves, which is essentially a set of criteria for evaluating the security of elliptic curve cryptography. The criteria include standards for properties of the curve itself, such as its order, shape, and rigidity against various attacks, as well as properties of the underlying field, such as size, stability, randomness, etc. The describe-curve tool can further help with SafeCurves checks, but at the time of writing it's still under development.

Resources

Demo

Here is the SageMath script for finding a pairing-friendly elliptic curve whose scalar field embeds the base field of Curve25519 using the Cocks-Pinch method.

Summary

The advances in blockchains and zero knowledge cryptography created a demand for new application-specific elliptic curves. This became especially relevant for arithmetic circuit development and recursive proof composition.

To reduce prover overhead when the verifier operates over a specific curve, the application can be expressed over a twisted Edwards curve whose base field matches the scalar field of the verifier's curve.

When the application logic requires certain curve arithmetic, the same optimization is possible by using the Cocks–Pinch or DEM methods to find a compatible pairing-friendly curve whose scalar field embeds the application curve's base field.

To perform efficient recursive proving, an elliptic curve cycle is needed, which can be obtained using the MNT method. A slightly more performant approach relies on Cocks–Pinch curves that form a chain, but then the recursion depth is limited.

For developing elliptic curves, use SageMath with ecfactory. To evaluate the security of newly found curves, use the SafeCurves and describe-curve tools.

Timofey Yaluhin

Elliptic curves are special mathematical objects commonly defined by a cubic equation of the form $y^2 = x^3 + ax + b$, where $a$ and $b$ are constants. Thanks to their mathematical properties, such as the ability to perform efficient arithmetic operations and the difficulty of solving the discrete logarithm problem (DLP) on them, elliptic curves became ubiquitous in modern cryptography. Today elliptic curves can be found in digital signature schemes (DSA), key exchange mechanisms (KEM), zero knowledge proofs (ZKP), multi-party computation (MPC), and more.

The goal of this short post is to provide a brief overview of parameters that collectively specify an elliptic curve and give a somewhat opinionated classification of existing elliptic curves.

Anatomy of elliptic curves

The most defining characteristic of an elliptic curve is its endomorphism ring, which is also its most abstract one. Namely, it is the set of structure-preserving maps from the curve to itself; operations such as point addition and scalar multiplication live inside it, and it carries information about the properties of the curve.

Order $n$ is the number of points on the curve and is sometimes called cardinality.

Base field $\mathbb{F}_q$ of an elliptic curve is the field over which the curve is defined. The base field size $q$ thereby defines the number of elements of the finite field $\mathbb{F}_q$.

Scalar field $\mathbb{F}_r$ is the field of scalars used in the operations performed on the curve, such as point addition, scalar multiplication, and pairing. The scalar field size $r$ is also the size of the largest subgroup of prime order. Interestingly, the elliptic curve DLP (ECDLP) on a curve is only as hard as that curve's largest prime-order subgroup, not its full order. However, when a curve's order is prime, its largest prime-order subgroup is the group itself, so $r = n$.

The following are three parameters that give a good taste of the curve's security and performance (a short SageMath check of them follows the list):

  • Relative parameter $\rho = \log q / \log r$ measures the base field size relative to the size of the prime-order subgroup on the curve. Small $\rho$ is desirable to speed up arithmetic on the elliptic curve.
  • Cofactor $h = n/r$ measures the curve's order relative to its largest prime-order subgroup. The cofactor of prime-order curves is always equal to 1. There's a trade-off: curves with a cofactor tend to have faster and simpler formulas than prime-order curves, but are also more likely to be vulnerable to certain attacks, such as malleability.
  • Trace of Frobenius describes the size of the curve relative to its base field and can be defined as $t = q + 1 - n$. It's used to better estimate the security of the elliptic curve and is useful when finding new curves with desirable properties.
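As a quick illustration (my own, with Curve25519 as the example), these parameters can be computed directly in SageMath:

```python
# Anatomy parameters of Curve25519: y^2 = x^3 + 486662*x^2 + x over GF(p)
p = 2**255 - 19
E = EllipticCurve(GF(p), [0, 486662, 0, 1, 0])
n = E.order()                                    # cardinality (point counting; may take a moment)
r = 2**252 + 27742317777372353535851937790883648493  # largest prime-order subgroup
h = n // r                                       # cofactor: 8
t = p + 1 - n                                    # trace of Frobenius
rho = float(log(p) / log(r))                     # relative parameter, ~1.01
```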

The embedding degree is the smallest integer $k$ that lets you transform an instance of the ECDLP over an elliptic curve $E(\mathbb{F}_q)$ into an instance of the DLP over the field $\mathbb{F}_{q^k}$. It's particularly important because the best known ECDLP attack, Pollard's rho, runs in $O(\sqrt{n})$, while the best DLP attack, Index Calculus, is sub-exponential in the field size. This kind of reduction is possible with pairing-friendly curves, so their $q^k$ must be significantly larger than the order $n$. When $k > 1$ we say that the curve is defined over an extension field of size $q^k$.
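In SageMath terms (my own illustration), the embedding degree is simply the multiplicative order of $q$ modulo $r$:

```python
def embedding_degree(q, r):
    # smallest k with r | q^k - 1, i.e. the multiplicative order of q mod r
    return Mod(q, r).multiplicative_order()

print(embedding_degree(2, 11))  # 10: an ECDLP in a subgroup of order 11 over GF(2)
                                # would transfer to a DLP in GF(2^10)
```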

Security bits $s$ measure the difficulty of solving the DLP on a curve. For plain curves $s$ is roughly $\log_2 r$. For pairing-friendly curves, $r$ must be selected such that $\log_2 r \geq 2s$ because of Pollard's rho algorithm, but due to the ambiguity of the Index Calculus attack complexity, estimating $s$ isn't as trivial: the attack cost is roughly $2^v \cdot e^{\sqrt[3]{(c/9)\,\ln(q^k)\,\ln(\ln(q^k))^2}}$, where the constants are $c = 32$ and $v = -7$ (source).
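For a rough feel of the numbers, the formula can be evaluated in a few lines of Python (my own sketch; a sanity check, not a substitute for careful analysis):

```python
from math import log

def security_bits(q, k, c=32, v=-7):
    # log2 of the estimated Index Calculus cost in the extension field F_{q^k}
    lnQ = k * log(q)                              # ln(q^k)
    return v + ((c / 9) * lnQ * log(lnQ)**2) ** (1/3) / log(2)

# BLS12-381-like sizes: q ~ 2^381, k = 12
print(round(security_bits(2**381, 12)))           # ~123, near published estimates
```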

Primitive point $G$ is a generator of the curve's group of points: all elements of the group can be expressed as $G + G + \dots + G$ (up to $n$ times). If a curve's order is prime, then all its points (except the point at infinity) are primitive.

Taxonomy of elliptic curves

There are many ways to classify elliptic curves: by their algebraic structure, geometric shape, cryptographic properties, security levels, etc. Let's start by looking at the properties of their endomorphism rings.

By the properties of their endomorphism rings

Ordinary elliptic curves have an endomorphism ring that is isomorphic (equivalent) to an order in an imaginary quadratic number field; in particular, it is commutative.

Supersingular curves are elliptic curves whose trace of Frobenius is divisible by the characteristic of the field (the smallest positive integer $m$ such that for all elements $a$ in the field, $a + a + \dots + a$ ($m$ times) $= 0$). Their endomorphism ring is non-commutative.

Complex multiplication (CM) elliptic curves are curves equipped with an extra endomorphism given by multiplication by a complex constant, beyond ordinary integer scalar multiplication. They naturally have a large number of points and can be used to generate large prime-order subgroups.

Pairing curves are defined by a pair of point groups $\mathbb{G}_1$, $\mathbb{G}_2$ and a bilinear map $\mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_T$ between them. Curves with a small embedding degree $k < 40$ and a large prime-order subgroup ($\rho \leq 2$) are called pairing-friendly curves.

By their definition form

Weierstrass curves are defined as $y^2 = x^3 + Ax + B$. This is arguably the most common form for elliptic curves. Weierstrass curves initially lacked complete addition formulas and were slower, but the gap has closed over time. Examples include the BLS family (BLS12-381, BLS12-377).

Montgomery curves are defined as $By^2 = x^3 + Ax^2 + x$. These curves are extremely efficient for scalar multiplication, being deliberately tailored for use with the Montgomery ladder. Using this algorithm, it's possible to multiply any point by any scalar without failure cases. Although there are more performant methods for multiplying a fixed point by a scalar, the Montgomery ladder remains practically unbeatable for variable-base scalar multiplication. A notable example is Curve25519.
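The ladder's appeal is its perfectly uniform structure: every scalar bit triggers exactly one doubling and one addition. Here is a SageMath sketch of mine (production X25519 code uses x-coordinate-only formulas rather than full point arithmetic):

```python
def montgomery_ladder(k, P):
    # Invariant: R1 - R0 = P on every iteration; bits processed MSB-first
    R0, R1 = P.curve()(0), P
    for bit in bin(k)[2:]:
        if bit == '0':
            R0, R1 = 2*R0, R0 + R1
        else:
            R0, R1 = R0 + R1, 2*R1
    return R0

p = 2**255 - 19
E = EllipticCurve(GF(p), [0, 486662, 0, 1, 0])  # Curve25519
P = E.lift_x(9)                                  # standard base point x = 9
assert montgomery_ladder(77, P) == 77 * P
```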

Edwards curves are defined as $Ax^2 + y^2 = 1 + Dx^2y^2$. Such curves gained huge popularity because they were the first to offer a complete addition law, i.e. one that allows efficiently adding any two points without failure cases. Complete addition formulas can simplify the task of an ECC implementer and, at the same time, can greatly reduce the potential vulnerabilities of a cryptosystem. An example of an Edwards curve is Ed25519.

Twisted Edwards curves are a generalization of Edwards curves that include a special affine transformation which makes the curve "twisted" and thereby more efficient for certain mapping operations, such as the Elligator map and hash-to-curve. Interestingly, curves of this form are birationally equivalent to Montgomery curves, so it's common to find them by first specifying a Montgomery curve and then transforming it into twisted Edwards form. Notable examples are Ed-on-BLS12-381 aka JubJub and Ed-on-BN254 aka Baby-Jubjub.
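The Montgomery-to-twisted-Edwards correspondence is simple enough to check directly. Below is a small SageMath illustration of mine using Curve25519's parameters (note that Ed25519 proper applies a further rescaling to reach $a = -1$):

```python
# Map B*y^2 = x^3 + A*x^2 + x to a*u^2 + v^2 = 1 + d*u^2*v^2
# via a = (A+2)/B, d = (A-2)/B and (x, y) -> (x/y, (x-1)/(x+1)).
p = 2**255 - 19
F = GF(p)
A, B = F(486662), F(1)          # Curve25519 parameters

a = (A + 2) / B                 # twisted Edwards coefficient a
d = (A - 2) / B                 # twisted Edwards coefficient d

x = F(9)                        # Curve25519 base point x-coordinate
y = (x**3 + A*x**2 + x).sqrt()  # one of the two y lifts
u, v = x / y, (x - 1) / (x + 1)
assert a*u**2 + v**2 == 1 + d*u**2*v**2
```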

Summary

Elliptic curves are associated with two finite fields: the base field is used to represent points on the curve, while the scalar field allows performing scalar multiplication on those points.

Characteristics such as the relative parameter $\rho$, cofactor, and trace can say a lot about the curve's security and performance. Estimating the security bits of a given curve is generally trivial for plain curves but can be quite troublesome for pairing-friendly curves.

Elliptic curves can be classified by their endomorphism rings or by their definition form. There exist ordinary, supersingular, CM, and pairing-friendly curves, all having a different endomorphism ring structure. When it comes to defining elliptic curve equations, the most popular forms are Weierstrass, Montgomery, Edwards, and twisted Edwards.

Elizabeth Binks

Originally posted on Medium

ChainSafe is proud to announce the completion of a collaboration with xx network, one of the world’s first quantum-resistant and privacy-focused blockchain ecosystems.

A few months back, xx network enlisted our help to implement Winternitz One Time Signatures (W-OTS+) for Substrate-based chains and automated Placard generation. The primary goal of this engagement was to introduce post-quantum security into signatures on Substrate.

Technical highlights

For context, W-OTS+ is a quantum-resistant digital signature scheme that uses relatively small key and signature sizes. The xx network, in turn, is a layer-one blockchain co-founded by cryptography pioneer David Chaum, and it is unique in its efforts to guard against cryptography-breaking quantum computing.

This collaboration had two focal points. The first was an implementation of an independent W-OTS+ crypto library that can be used in any context. The second was a Substrate pallet and off-chain worker implementation that introduces the W-OTS+ signature scheme and Placards into a Substrate runtime environment.

Both implementations are novel to the Rust-Substrate ecosystem. There’s an existing Golang implementation, but to foster interoperability with Substrate-based chains, the W-OTS+ scheme has been written in Rust.

This package of technologies, therefore, can be regarded as an attractive option for future implementers from various language backgrounds.

Broader implications

W-OTS+ is suitable for post-quantum environments as it’s a hash-based signature scheme. Unlike the current signature schemes generally used in blockchains (e.g., ECDSA, Schnorr, BLS, etc.), which are based on the hardness of the discrete logarithm problem, hash-based schemes are based on the hardness of finding the preimage of a hash.

Unlike the former, there is no efficient quantum algorithm for the latter, making hash-based signature schemes quantum-resistant.

Hash-based cryptography is not the only approach being proposed for a post-quantum world. Isogeny-based and lattice-based approaches both hope to address the one-time-use limitation of hash-based signature schemes.

However, following the unexpected critical vulnerabilities recently discovered in the NIST candidate SIDH, hash-based cryptography remains a safe approach.

Given that W-OTS+ signatures are one-time, part of the private key can potentially be recovered each time a signature is revealed. Thus, reusing a private key for multiple signatures eventually leaks the entire private key.

To work around this issue for the xx network’s use case, a set of keys derived from the same seed and merkleized is published as a validator’s public key. A signature is then validated together with a Merkle proof of inclusion in the tree with the published root. This Merkle tree of keys is referred to as a “Placard” in the implementation, which can be seen as a simplified version of the XMSS signature scheme.
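To make the one-time property concrete, here is a toy Python sketch of the Winternitz hash-chain idea. It is deliberately simplified (no checksum digits, illustrative parameters) and is in no way the xx network’s audited Rust implementation:

```python
import hashlib, os

W = 16                                   # Winternitz parameter: base-16 digits

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def keygen(n_digits: int = 32):
    sk = [os.urandom(32) for _ in range(n_digits)]
    pk = [chain(s, W - 1) for s in sk]   # public key: fully iterated chains
    return sk, pk

def sign(msg: bytes, sk):
    digits = [b % W for b in H(msg)]     # simplified: real W-OTS+ adds a checksum
    return [chain(s, d) for s, d in zip(sk, digits)]

def verify(msg: bytes, sig, pk) -> bool:
    digits = [b % W for b in H(msg)]
    return [chain(s, W - 1 - d) for s, d in zip(sig, digits)] == pk

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)
```

Each signature exposes intermediate chain values, so signing two different messages with the same key reveals enough preimages for forgeries - hence the Merkle tree of many one-time keys.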

Summary

As the need for post-quantum cryptography draws nearer, the xx network is ensuring its privacy-preserving tech stack is quantum-secure.

This stack includes a private messenger app backed by a mixnet; a novel wallet generation mechanism called Sleeve, which allows embedding a W-OTS+ public key as a backup for any other cryptocurrency wallet — and now, the first step towards integrating quantum secure cryptography into the xx network’s Substrate-based blockchain.

The newly developed W-OTS+ crypto library has already empowered the xx network team to implement Sleeve wallet generation in Rust and cross-compile it to WASM for use in web browsers. Any user can now generate a quantum-ready wallet for any cryptocurrency directly in the xx network’s web-based wallet and staking app.

Timofey Yaluhin

Originally posted on Medium

Cross-chain applications

Interoperability solutions have shown great promise in unlocking the potential of decentralized applications in our emerging multi-chain ecosystem. However, due to the high volumes of digital assets and critical data flowing across them, blockchain bridges are amongst the most targeted protocols in the web3 space. This leaves researchers hunting for more advanced security designs.

Multi-Party Computation (MPC) is one of the most interesting potential solutions. Secure MPC represents a powerful next step in digital asset security because it eliminates the risks of a single point of compromise.

Instead of relying on Multisig and other (older) ways of key management that either expose relayer identities or introduce exploitable single-points-of-failure, relayers would run a secure MPC ceremony each time a user wishes to bridge funds or transfer arbitrary data.

In this way, MPC enables multiple parties to carry out a distributed computation on their secret inputs without revealing anything but the output.

This concept has been studied by academia for decades. Still, it’s only due to recent technological advancements that it has become viable for real-world applications like Sygma, the interoperability layer for building cross-chain applications.

Let’s unpack how MPC works, what makes it unique, and why we chose to adopt it.

Threshold cryptography: deep dive

Threshold Signature Schemes (TSS) are the area of MPC we will focus on today. They’re particularly useful for crypto as they facilitate the distribution of a private key to multiple parties, introducing redundancy into asset management security.

In other words, it enables a set of parties to perform certain cryptographic operations, like signing transactions, while none of them holds a full private key. Instead, the key is split across the parties and can only be used when a subset of them — the size of which is larger than a certain threshold — combines their key shares.

Thanks to the homomorphic properties of the underlying scheme, a fully formed private key doesn’t ever need to exist. “Homomorphism” is just a fancy mathematical way to say the operations you can perform on the unencrypted (plaintext) values, like addition or multiplication, will behave identically on the encrypted (ciphertext) ones.

You can imagine the benefits of this for privacy.

For example, a user sends encrypted financial data to the server, and it responds with an encrypted credit score that only they would be able to decrypt. If that sounds interesting, see this article for more details and this library if you want to tinker with it.

An example

Imagine you have a secret key sk and a special algorithm that can divide this key into n pieces such that [sk_i] = share_key(sk, n, t). Imagine now you want to sign a transaction m, so you apply a similar algorithm to get partial signatures [s_i] = sign(m, [sk_i]). Now, to reconstruct a valid signature, you would simply sum all partial signatures together, s = s_1 + s_2 + … + s_n, and call it a day.

You might’ve also noticed a third argument t when we shared our key. Although the key is shared between n parties, we only need a threshold number of them to actually sign something. This is akin to a multisig scheme, which, interestingly, is just an emulation of threshold signatures using a high-level smart contract language like Solidity. A toy sketch of t-of-n sharing follows below.
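For intuition, here is a toy Python sketch of t-of-n secret sharing in the spirit of the hypothetical share_key above (Shamir’s scheme over a prime field). A real TSS never reconstructs sk in one place; this only illustrates the threshold mechanics:

```python
import random

P = 2**127 - 1                            # toy prime modulus

def share_key(sk: int, n: int, t: int):
    # random degree-(t-1) polynomial f with f(0) = sk; share i is (i, f(i))
    coeffs = [sk] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = sk
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc

sk = random.randrange(P)
shares = share_key(sk, n=5, t=3)
assert reconstruct(shares[:3]) == sk      # any 3 of the 5 shares suffice
```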

Of course, multisigs come with a cost where one would pay miners to process each call to the multisig contract. Conversely, threshold signatures are processed off-chain, and only a single compact ECDSA signature needs to be transacted on-chain once. Furthermore, such a signature won’t leak anything about its signers, which secures them from targeted attacks and is great for privacy.

When discussing security, MPC algorithms generally provide guarantees based on the threshold number of corrupted parties a system can tolerate. This places TSS in a unique position, as such schemes present the control of their robustness directly in the developer’s hands. Furthermore, this allows it to withstand even the dishonest majority — an extreme state where adversaries can corrupt all but one participant.

You may already know about the blockchain’s Scalability Trilemma and the Interoperability Trilemma of the cross-chain ecosystem. Let’s introduce a third one for the MPC domain — the Threshold Trilemma. Below are the three properties that MPC protocols try to maximize but can only have two of at the same time:

  • Risk minimization (robustness): the higher the threshold value set, the harder it is for parties to collude against the system, e.g., forge a signature.
  • Fault tolerance (liveness): the lesser the threshold value compared to the total number of parties, the more unforeseen failures such a system can tolerate, e.g., a peer accidentally going offline.
  • Performance (latency): the more parties the system employs, the more decentralized, secure, and reliable it would be, but at the expense of increasing performance overhead due to MPC’s high communication complexity.

Generally, protocol engineers prefer to minimize risk first and then balance liveness and performance based on the chosen threshold. However, it’s essential to base the threshold calculation on concrete metrics, such as the amount of collateral nodes would have to stake or the amount of computational work needed for participation.

One last trick that can present an even greater degree of confidence in threshold cryptosystems is their unique “Key Reshare” mechanism — which allows parties from the old set to rotate key shares with new participants without changing the underlying public key. This is useful to onboard new members into the signing committee, but more importantly, it prevents hackers from corrupting parties one after another, potentially in the course of many sessions (known as proactive adversaries).

Applications of TSS in blockchains

There are many ways TSS is used today. Keyless wallets like ZenGo or Torus offer a more convenient and secure alternative — no more paper-written mnemonics or cloud-backed keys are needed. Instead, the wallet provider’s servers/nodes actively cooperate with the user to sign transactions.

The folks at Chainlink are using thresh-sigs to efficiently and verifiably aggregate data for their oracle network. Even some rollups like Skale Network are flirting with this technology, although instead of ECDSA, they use BLS signatures that require less MPC-intensive setup due to their elliptic curve pairing properties.

Probably the biggest beneficiaries of TSS are a new generation of custodian services led by Fireblocks and Entropy. These companies aim to disrupt the way big institutions and foundations operate their escrows and treasuries.

Finally, threshold cryptography holds great promise for various cross-chain applications. While designing Sygma, a new vision of cross-chain interoperability, we became convinced that using MPC for relayer communication will not only strengthen overall security but also significantly reduce fees, making the user experience much smoother.

Willem Olding

Originally posted on Medium

ChainSafe R&D is ChainSafe’s internal applied research and development arm. We provide high quality research to explore new technical and business frontiers in support of larger projects both internal and external to ChainSafe. The ChainSafe R&D team has previously completed (and in many cases continue to maintain) engagements with teams like Gitcoin (“decentralized Grants”), and Polygon (“v3 spec”), to name a few.

ChainSafe has recently concluded a 5-week sprint supporting development of the xx Network as they move toward their mainnet release.

The xx Network blockchain is built using Parity’s Substrate framework in Rust. This has allowed the team to make rapid progress developing the custom chain logic for xx Network while building on top of a battle tested and audited codebase in Substrate.

ChainSafe has been a long-time supporter of Substrate, with projects including ChainBridge (a flexible solution for bridging Substrate and Ethereum chains), the PINT parachain, and our own runtime-compatible framework Gossamer. We jumped at the chance to help another promising project make it through the final hurdles. Supporting xx Network also highlights our growing capabilities as Substrate developers and code reviewers in an extremely exciting space in blockchain development.

Network-layer privacy vs. Transaction-level privacy

In its first form, the xx Network blockchain serves to support and incentivize the already operational mixnet protocol cMix. Mixnets provide network-layer privacy by routing messages between mixing nodes, which effectively erase any link between the sender and the receiver. cMix requires global coordination and incentivization for mix-nodes, and the xx blockchain solves both of these! Nodes participating in consensus must also be active mix-nodes. Their performance as mixers directly affects what they can expect to receive as block rewards. You can learn more in the cMix whitepaper.

Network-layer privacy has seen much less attention in the crypto space compared with transaction-level privacy, as present in other blockchain projects such as ZCash and Monero. However, both will be required for us to one day be able to transact with total privacy, which includes end-to-end encryption and metadata shredding. Moreover, the network layer privacy offered by xx Network will power xx messenger, a private off-chain messaging app available on major mobile platforms shortly after mainnet launch.

The first release of the xx chain adds on-chain logic to support the operation of cMix as well as the unique economic logic for xx coin. Since the pre-sale for xx coin was instantiated on Ethereum as an ERC-1404 token, the network adapted Polkadot’s Claims module in order to allow users to receive their native xx coins at mainnet launch. Furthermore, xx Network plans to use ChainBridge to allow users to swap wrapped ERC20 xx coins on Ethereum to their native form on the xx blockchain.

Contributions from ChainSafe

As part of the engagement, all new additions were evaluated line by line, tests were written to ensure full coverage (over 100 tests were added in total!), and benchmarking was added to automatically derive weights for all externally callable functions. Several major issues were found and subsequently fixed by the team. Progress! The full report of the review will be available once the codebase is released to the public.

We look forward to the future of xx Network and the value they will bring to the blockchain ecosystem with their privacy preserving technology.

Acknowledgments

Thank you to Tim Ho and Bernardo Cardoso. Your contributions were invaluable to the making of this article.