Intro
This directory contains the Aztec Connect specifications.
The protocol is broken up into explicit subcomponents each with a specification:
- The rollup smart contract architecture
- Definitions of notes and nullifiers
- ZK-SNARK circuit primitives
- Circuit specification for evaluating unsigned integer arithmetic
- Schnorr signature specification
- Account circuit specification
- Join-split circuit specification
- Defi claim circuit specification
- Rollup circuit specification (sometimes referred to as "inner rollup")
- Root rollup circuit specification (sometimes referred to as "outer rollup")
- Root verifier circuit specification
Cryptography Primitives
1. Pairing Groups on BN-254
Aztec uses the Ethereum-native version of the BN254 elliptic curve for its principal group:
First pairing source group
A BN curve of prime group order r, with prime field size p, and security of roughly 110 bits (practically, this can be closer to 128 bits, as the stronger attacks require unproven assumptions related to number field sieve algorithms and have never been fully specified or implemented).
- Equation: y² = x³ + 3
- Parameters
- Field F_p for the prime p = 21888242871839275222246405745257275088696311157297823662689037894645226208583 (254 bits)
- Group G1 of prime order r = 21888242871839275222246405745257275088548364400416034343698204186575808495617 (254 bits)
- Generator g1 = (1, 2)
We have r < p. As usual, we use a subgroup of a twist of the above curve for efficient pairings:
Second pairing source group
A subgroup of size r of a curve over the field F_{p²} of size p². F_{p²} is a degree-2 field extension of F_p, via F_{p²} = F_p[X]/I, where I is the ideal generated by X² + 1, whose roots are ±u.
- Equation: y² = x³ + 3/(9 + u)
- Parameters
- Field F_{p²} for p as above.
- Group G2 = subgroup of the twisted curve over F_{p²} of the same prime order r as G1.
- Generator g2 (a fixed point of order r; see the code for its coordinates)
Pairing
We use the Ethereum-native Ate pairing, a bilinear map taking e: G1 × G2 → G_T,
where G_T is a subgroup of the multiplicative group of F_{p¹²}, a field extension of F_p of degree 12.
Further details may be found here.
2. Grumpkin - A curve on top of BN-254 for SNARK efficient group operations.
Grumpkin in fact forms a curve cycle together with BN-254, meaning that the field and group order of Grumpkin are equal to the group and field order of (the first pairing group of) BN-254:
- Equation: y² = x³ − 17
- Parameters
- Group G of order p (the BN-254 base-field modulus)
- Base field F_r, for r the order of the BN-254 group G1
3. Hashes
The Aztec 2.0 system relies on two types of hashes:
- Pedersen Hashes (for collision resistance)
- Blake2 Hashes (for pseudorandomness)
Aztec relies overwhelmingly on Pedersen hashes, as collision resistance is sufficient most of the time.
Pedersen Hashes
Let G be an additive group of prime order r.
In its classical setting, a Pedersen hash is defined as a map taking a tuple of field elements (a_1, …, a_t) to the group element a_1·g_1 + … + a_t·g_t,
for generators g_1, …, g_t chosen independently by public randomness (e.g. heuristically as distinct outputs of a hash function simulating a random oracle).
We wish to define a variant of Pedersen to enable hashing strings of any desired length. As our group we will use the Grumpkin curve group described above.
We generate a sequence of generators as hash outputs -- these are network parameters, fixed for the life of the protocol. They are simply chosen to be the first Keccak-256 outputs that correspond to group elements. See the derive_generators method in the barretenberg code and the Global Constants section below for exact details.
Hashing field elements
Our basic component for hashing will be the hash_single method.
Given a field element a and hash index i, we essentially hash 252 bits of a with one generator and the remaining 2 bits with another. This is not precisely the case, as we use a WNAF representation - see page 4 here. See the comments above hash_single in the code for exact details. The point is that, while enforcing the WNAF representation to represent an integer smaller than r, this is a collision-resistant function from field elements to group elements under DL, even when outputting only the x-coordinate.
Now, given a vector (a_1, …, a_t), we define the Pedersen hash as H(a_1, …, a_t) = Σ_{i=1}^t hash_single(a_i, i).x
Hashing byte arrays:
Given a message of arbitrary size, we first divide it up into fixed-size byte chunks m_1, …, m_k, each small enough to embed in a field element.
We now identify each m_i with a field element a_i in the natural way, and define H(m) = H(a_1, …, a_k).
For details on how the generators have been generated, please see Global Constants.
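The chunk-and-lift step can be sketched as follows. The 31-byte chunk size and the big-endian interpretation are assumptions for illustration (consult the barretenberg code for the authoritative rules); 31 bytes (248 bits) always fits below the 254-bit modulus:

```python
# Sketch: split a byte string into chunks and lift each chunk to a field
# element. Chunk size (31 bytes) and big-endian order are illustrative
# assumptions, not the codebase's authoritative encoding.

# BN254 group order r (the base field of Grumpkin): 254 bits, so any
# 248-bit chunk is guaranteed to be a valid field element.
R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def bytes_to_field_elements(msg: bytes, chunk_size: int = 31) -> list:
    chunks = [msg[i:i + chunk_size] for i in range(0, len(msg), chunk_size)]
    return [int.from_bytes(c, "big") for c in chunks]

elements = bytes_to_field_elements(b"a" * 40)   # two chunks: 31 + 9 bytes
assert all(e < R for e in elements)
```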
Blake2s Hash
We use the Blake2s hash more sparingly, because it is not SNARK-friendly, but it does exhibit pseudorandomness not offered by Pedersen. That is, it is considered a reasonable heuristic to use it in place of the random oracle used in a security proof.
We employ the standard implementation of the Blake2s hash, which is fully documented here.
The Blake2s hash is utilized for computing nullifiers and for generating pseudorandom challenges, when verifying Schnorr signatures and when recursively verifying Plonk proofs.
Pedersen Hash 'h' Elements
There are additionally elliptic curve group points h_i used in the computation of Pedersen hashes.
For example:
- some are used to compute hashes for large data strings with many inputs
- some are used for all Pedersen hashes in the Note Tree and Nullifier Tree
The generator algorithm for computing the h_i in pseudocode is:
counter = 0
h = []
do
{
    compute x = keccak256(pad(counter)), pad(counter) = 32-byte pad of counter
    find y = sqrt(x³ + ax + b)
    if y = error
    {
        // unsuccessful: point does not exist (50% chance)
    }
    else
    {
        // successful: point exists, add to list (50% chance)
        set h[counter] = (x, y)
    }
    counter = counter + 1
}
while counter < 1024
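A runnable analogue of this pseudocode, with SHA-256 standing in for Keccak-256 and a tiny curve standing in for Grumpkin (both assumptions, purely for illustration):

```python
import hashlib

# Runnable analogue of the generator-derivation pseudocode above.
# Assumptions for illustration: SHA-256 stands in for Keccak-256, and a tiny
# curve y^2 = x^3 + 3 over F_p with p = 10007 (p % 4 == 3, so square roots
# are pow(n, (p+1)//4, p)) stands in for Grumpkin.
P, A, B = 10007, 0, 3

def sqrt_mod_p(n):
    n %= P
    r = pow(n, (P + 1) // 4, P)             # candidate root, valid since P % 4 == 3
    return r if (r * r) % P == n else None  # None: n is a quadratic non-residue

def derive_generators(count):
    h, counter = [], 0
    while len(h) < count:
        # x = H(pad(counter)): hash a 32-byte big-endian encoding of the counter
        digest = hashlib.sha256(counter.to_bytes(32, "big")).digest()
        x = int.from_bytes(digest, "big") % P
        y = sqrt_mod_p(x ** 3 + A * x + B)
        if y is not None:                   # roughly 50% of x values yield a point
            h.append((x, y))
        counter += 1
    return h

gens = derive_generators(4)
for x, y in gens:
    assert (y * y) % P == (x ** 3 + A * x + B) % P   # every output is on the curve
```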
Code Freeze Fall 2021: Schnorr signatures
tags: project-notes
The code is templated in such a way that the primes p and q are defined relative to the group G1, which is unfortunate, since each is chosen as a fixed, definite value in our specs. An alternative would be to have the templates in schnorr.tcc refer to F_nat and F_em (for 'native' and 'emulated') or something like this. The easier and probably better alternative for now is to just rename our primes in the Yellow Paper to match.
For Aztec's current use cases, G1 is a cyclic subgroup of an elliptic curve defined over a field (implemented as a class Fq), and Fr (aka field_t) is a field of size equal to the size of G1, so Fr is the field acting on G1 by scalar multiplication.
Role:
The Yellow Paper mentions them only here: "The Blake2s hash is utilized for computing nullifiers and for generating pseudorandom challenges, when verifying Schnorr signatures and when recursively verifying Plonk proofs."
They are used by the account circuit and the join-split circuit.
Data types
crypto::schnorr::signature is a pair of two 256-bit integers, each represented as a length-32 std::array of uint8_t's.
crypto::schnorr::signature_b is a pair of the same type.
wnaf_record<C> is a vector of bool_t<C>'s along with a skew.
signature_bits<C> is four field_t's, representing a signature by splitting each component into two.
Formulas
Elliptic curve addition.
We restrict in this code to working with curves described by Weierstrass equations of the form y² = x³ + b, defined over a field F_p with p prime. Consider two non-identity points P₁ = (x₁, y₁), P₂ = (x₂, y₂). If x₁ = x₂, then y₁² = y₂², so the two points are equal or one is the inverse of the other. If y₁ = y₂ but P₁ ≠ P₂, then one has x₁³ = x₂³ with x₁ ≠ x₂. In the case of Grumpkin, the equation X³ = 1 splits over F_p, so there are indeed distinct pairs of points satisfying this relation (for an example of how we handle this elsewhere in the code base, see https://github.com/AztecProtocol/aztec2-internal/issues/437).
Suppose P₁ ≠ −P₂. Then P₁ + P₂ = (x₃, y₃) with x₃ = λ² − (x₁ + x₂) and y₃ = λ(x₁ − x₃) − y₁, where λ = (y₂ − y₁)/(x₂ − x₁) if P₁ ≠ P₂ and λ = 3x₁²/(2y₁) if P₁ = P₂.
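The chord/tangent formulas above can be exercised on a toy curve (illustrative parameters, not Grumpkin; `None` models the point at infinity):

```python
# Affine addition on a short-Weierstrass curve y^2 = x^3 + a*x + b over F_p,
# following the lambda (chord/tangent) formulas in the text. The parameters
# are toy values for illustration only.
P_MOD, A, B = 10007, 0, 3

def on_curve(pt):
    if pt is None:
        return True
    x, y = pt
    return (y * y) % P_MOD == (x ** 3 + A * x + B) % P_MOD

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                              # P + (-P) = identity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD         # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def find_point():
    # p % 4 == 3, so a square root of a residue n is pow(n, (p+1)//4, p)
    for x in range(P_MOD):
        rhs = (x ** 3 + A * x + B) % P_MOD
        y = pow(rhs, (P_MOD + 1) // 4, P_MOD)
        if (y * y) % P_MOD == rhs:
            return (x, y)

g = find_point()
assert on_curve(ec_add(g, g))   # doubling stays on the curve
```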
Algorithms
Let g be a generator of G1.
HMAC
We use the HMAC algorithm as a Pseudo-Random Function (PRF) to generate a randomly distributed Schnorr signature nonce in a deterministic way. HMAC is the Hash-based Message Authentication Code specification as defined in RFC 4231.
The HMAC algorithm: Given a message m and a PRF key k, the value HMAC(k, m) is computed as
H((k' ⊕ opad) ∥ H((k' ⊕ ipad) ∥ m)),
where:
- H is a hash function modeling a random oracle, whose block size is 64 bytes
- k' is a block-sized key derived from k.
- If k is larger than the block size, we first hash it using H and set k' = H(k)
- Otherwise, k' = k
- In both cases, k' is right-padded with 0s up to the 64-byte block size.
- opad is a 64-byte string, consisting of repeated bytes valued 0x5c
- ipad is a 64-byte string, consisting of repeated bytes valued 0x36
- ∥ denotes concatenation
- ⊕ denotes bitwise exclusive or
- the output is a 32-byte string
In order to derive a secret nonce k, we expand the HMAC output in order to derive a 512-bit integer k'. Modeling k' as a uniformly sampled integer, taking k = k' mod r ensures that the statistical distance between the distribution of k and the uniform distribution over [0, r) is negligible.
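A sketch of this expand-then-reduce derivation, using HMAC-SHA-256 and an assumed counter-byte expansion to obtain 512 bits (the real scheme derives its HMAC inputs per the code):

```python
import hashlib
import hmac

# Sketch of deterministic nonce derivation: expand an HMAC output to 512 bits,
# then reduce mod r. The counter-byte expansion below is an illustrative
# assumption; the real scheme derives its inputs per the barretenberg code.
R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def derive_nonce(secret_key: bytes, message: bytes) -> int:
    # two HMAC evaluations, domain-separated by a trailing byte, give 512 bits
    lo = hmac.new(secret_key, message + b"\x00", hashlib.sha256).digest()
    hi = hmac.new(secret_key, message + b"\x01", hashlib.sha256).digest()
    k512 = int.from_bytes(hi + lo, "big")   # 512-bit integer k'
    return k512 % R                         # bias of the reduction is negligible

k = derive_nonce(b"secret key material", b"message")
assert 0 <= k < R
assert k == derive_nonce(b"secret key material", b"message")  # deterministic
```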
Sign
We use signatures with compression as described in Section 19.2.3 of [BS20], in the sense that the signature contains a hash and a field element, rather than a group element and a field element.
The algorithm: Given a message m, an account with secret key x and public key X = x·g produces the signature
(s, e),
where:
- k is the signer's secret nonce, derived via HMAC as described above.
- R = k·g is a commitment to the signer's nonce k.
- e = H(pedersen(R.x, X.x, X.y), m), reduced modulo r, is the Fiat-Shamir challenge, and s = k − x·e is the Fiat-Shamir response.
- (X.x, X.y) is the affine representation of the signer's public key.
- "reduced modulo r" refers to a function interpreting a binary string as an integer and applying the modular reduction by r.
- pedersen is a collision-resistant Pedersen hash function.
- H is a hash function modeling a random oracle, which is instantiated with BLAKE2s.
The purpose of the Pedersen hash here is to include the public key in the challenge whilst ensuring the input to H is no more than 64 bytes.
Verify
Given (s, e), purported to be the signature of a message m by an account X with respect to a random oracle hash function H, compute
- R = s·g + e·X;
- e' = H(pedersen(R.x, X.x, X.y), m).
The signature is verified if and only if e' = e, where the comparison is done bit-wise.
Imprecise rationale: The verification equation is e' = e, where both sides of the equation are represented as an array of 256 bits. The VERIFIER has seen that the SIGNER can produce a preimage for a given e, which is outside of the SIGNER's control, by choosing a particular value of s. The difficulty of this assumption is documented, in the case where the group is the units group of a finite field, in Schnorr's original paper [Sch] (cf. especially pages 10-11).
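The sign/verify shape above can be sketched in a toy group. This uses a prime-order subgroup of Z_p^* instead of Grumpkin, and SHA-256 instead of the Pedersen/BLAKE2s pipeline; all parameters are illustrative assumptions, and only the protocol shape matches the spec:

```python
import hashlib

# Toy Schnorr with "compression": the signature is (s, e), a scalar and a
# hash, exactly as in the spec, but over a prime-order subgroup of Z_607^*
# instead of Grumpkin, with SHA-256 as the challenge hash. Illustrative only.
P, Q = 607, 101                  # 607 = 6*101 + 1, so Z_607^* has an order-101 subgroup
G = pow(2, (P - 1) // Q, P)      # generator of the order-Q subgroup

def challenge(R: int, pub: int, msg: bytes) -> bytes:
    # e = H(commitment || public key || message)
    return hashlib.sha256(
        R.to_bytes(2, "big") + pub.to_bytes(2, "big") + msg).digest()

def sign(msg: bytes, x: int, k: int):
    R = pow(G, k, P)                               # commitment to the nonce k
    e = challenge(R, pow(G, x, P), msg)
    s = (k - x * int.from_bytes(e, "big")) % Q     # Fiat-Shamir response
    return s, e

def verify(msg: bytes, pub: int, sig) -> bool:
    s, e = sig
    # recompute the commitment: R = g^s * X^e
    R = (pow(G, s, P) * pow(pub, int.from_bytes(e, "big"), P)) % P
    return challenge(R, pub, msg) == e             # bit-wise comparison

x = 7                          # secret key
pub = pow(G, x, P)             # public key
sig = sign(b"hello", x, k=13)
assert verify(b"hello", pub, sig)
assert not verify(b"other", pub, sig)
```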
Variable base multiplication
- scalar presented as a bit_array
- scalar presented as a wnaf_record, provided along with a current_accumulator
Code Paths
verify_signature
- There is an aborted state reached if the two points being added have the same x-coordinate.
- Normal signature verification path.
variable_base_mul(pub_key, current_accumulator, wnaf)
- This function is only called inside of variable_base_mul(pub_key, low_bits, high_bits). There is an init predicate given by: "current_accumulator and pub_key have the same x-coordinate". This is intended as a stand-in for the more general check that these two points are equal. This condition distinguishes between two modes in which the function is used in the implementation of the function variable_base_mul(pub_key, low_bits, high_bits): on the second call, the condition init is expected to be false, so that the results of the first call, recorded in current_accumulator, are incorporated in the output.
- There is branching depending on the parity of the scalar represented by wnaf.
variable_base_mul(pub_key, low_bits, high_bits)
- There is an aborted state that is reached when either of the field elements is zero.
convert_signature(context, signature)
There is no branching here.
convert_message(context, message_string)
This function has not been investigated since I propose it be removed. It is not used or tested.
convert_field_into_wnaf(context, limb)
- When accumulating a field_t element using the proposed wnaf representation, there is branching at each bit position depending on the 32nd digit of the current uint64_t element wnaf_entries[i+1].
Security Notes
Usage of HMAC for deterministic signatures
There are two main reasons why one may want deterministic signatures.
In some instances, the entropy provided by the system may be insufficient to guarantee a uniform k, and using HMAC with a proper cryptographic hash function should therefore ensure this property.
By deriving it from the secret key, it also ensures that k remains private to the signer.
Nowadays, and especially for the types of devices on which we would be creating signatures, we can assume that the system's randomness source is strong enough for creating signatures.
There are different ways of achieving this property, such as RFC 6979, or as defined by the EdDSA specification.
Our approach is closer to RFC 6979, though we do not use rejection sampling; instead we generate a 512-bit value and apply modular reduction by r.
This ensures that the statistical difference between the distribution of k and the uniform distribution over [0, r) is negligible.
Note that any leakage of the value of k may be catastrophic, especially in ECDSA.
Unfortunately, by using the secret key both for signing and as input to HMAC, the original security proof of the signature scheme no longer applies.
We would need to derive two independent signing and PRF keys from one 256-bit secret seed.
Signature malleability
Given a valid signature (s, e), it is possible to generate another valid signature (s', e), where s' ≠ s but s' ≡ s (mod r) (take s' to be any other 256-bit value congruent to s modulo r).
In our context, signatures are used within the account and join_split circuits to link the public inputs to the user's spending key.
The signatures themselves are private inputs to the circuit and are not revealed. We do not depend on their non-malleability in this context.
The solution would be to check that s < r.
Missing component in Pedersen hash
As mentioned, we use the collision-resistant Pedersen hash to compress R and X when computing the Fiat-Shamir challenge e. We are aware that we do not embed the y coordinate of R, and we are working on a security proof to ensure this does not render the scheme insecure.
Biased sampling of Fiat-Shamir challenge
When we interpret e as a field element by reducing the corresponding integer modulo r, the resulting field element is slightly biased in favor of "smaller" field elements, since 2^256 is not a multiple of r. Fixing this issue would require a technique similar to the method we use to derive k without bias. Unfortunately, this would require many more gates inside the circuit verification algorithm (an additional hash computation and modular reduction of a 512-bit integer).
We are no longer in the random oracle model since the distribution of the challenge is not uniform. We are looking into alternative proofs to guarantee correctness.
Domain separation
We do not use domain separation when generating the Fiat-Shamir challenge with BLAKE2s. Other components using the same hash function as random oracle should be careful that this could not lead to collisions when similar inputs are being processed.
We also note that we do not hash the group generator into the hash function.
References
WNAF representation: https://github.com/bitcoin-core/secp256k1/blob/master/src/ecmult_impl.h, circa line 151
NOTE: the original NAF paper Morain, Olivos, "Speeding up the computations...", 1990 has a sign error in displayed equation (7). This is not present in our variable_base_mul function.
[BS20] Boneh, D., Shoup, V. "A Graduate Course in Applied Cryptography", Version 0.5, January 2020.
[Sch] Schnorr, C. "Efficient Identification and Signatures for Smart Cards", 1990.
Code Freeze Fall 2021: unsigned integers
tags: project-notes
A standard library uint is a circuit manifestation of a fixed-width unsigned integer. The type is parameterized by a composer and one of the built-in types uintN_t for N = 8, 16, 32, 64. The value N here is referred to as the width of the type.
Shorthand: write uint_ct for a generic uint<Composer, Native>, and refer to an instance of such a class as simply a uint_ct.
Role:
One wants such a type, for example, to implement traditional "numeric" hash functions, as we do for BLAKE2s and SHA256.
Data types
The state of a uint_ct is described by the following protected members:
- Composer* context
- mutable uint256_t additive_constant: A component of the value.
- mutable WitnessStatus witness_status: An indicator of the extent to which the instance has been normalized (see below).
- mutable std::vector<uint32_t> accumulators: Accumulators encoding the base-4 expansion of the witness value, as produced by TurboComposer::decompose_into_base4_accumulators. This vector is populated when the uint_ct is normalized. We record the values for later use in some operators (e.g., shifting).
- mutable uint32_t witness_index: The index of a witness giving part of the value.
Key concepts
Value and constancy
Similar to the value of an instance of field_t<Composer>, the value (a uint256_t) of a uint_ct consists of a "constant part" and possibly a witness value. To be precise, the function get_value returns
(uint256_t(context->get_variable(witness_index)) + additive_constant) & MASK,
where MASK enforces that the result is reduced modulo 2^width. There is also an "unbounded" version that does not mask off any overflowing values.
The value of a uint_ct thus consists of a witness part w and a constant part c. We will use this notation throughout. If the index of the witness is the special IS_CONSTANT value, then the uint_ct is said to be constant.
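A minimal model of this value computation, with illustrative names mirroring get_value:

```python
# Model of a uint_ct's value: witness part plus additive constant, reduced
# modulo 2^width by a mask. Names mirror the C++ get_value for illustration;
# this is a stand-in, not the circuit type itself.
WIDTH = 32
MASK = (1 << WIDTH) - 1

def get_value(witness_value: int, additive_constant: int) -> int:
    return (witness_value + additive_constant) & MASK

def get_unbounded_value(witness_value: int, additive_constant: int) -> int:
    return witness_value + additive_constant   # no modular reduction

assert get_value(0xFFFFFFFF, 1) == 0                       # wraps modulo 2^32
assert get_unbounded_value(0xFFFFFFFF, 1) == 0x100000000   # overflow retained
```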
Normalization
A naive implementation of the class uint_ct would take field_t and enrich it with structure to ensure that the value is always appropriately range constrained. Our implementation is more efficient in several ways.
We track an additive_constant to reduce the number of divisions (by 2^width) that must be recorded by the prover; for instance, if a uint is to be repeatedly altered by adding circuit constants c_1, …, c_n, the circuit writer is happy to save the prover some effort by computing c = c_1 + … + c_n and, instead, asking the prover to demonstrate that they have computed the long division of w + c by 2^width.
We also allow for the deferral of range constraints for efficiency reasons.
If a uint_ct a is constant, then it is regarded as "normalized" -- the prover does not need to provide any constraints on it showing that its value is of the appropriate width.
If a is not constant, then it is allowed to exist in an 'unnormalized' state. By definition, normalizing a means replacing it by a new uint_ct a' with zero additive constant and witness w' proven to be equal to the value of a. To prove this equation, one must impose the following two constraints:
- w + c = w' + 2^width · d for some integer d;
- w' lies in the range [0, 2^width).
We track whether these two constraints have been applied independently. If the first constraint has been applied, then a is said to be weakly normalized. If both have been applied, a is said to be normalized. This status is tracked through an enum called WitnessStatus that can take on three values.
Example: addition
Our function operator+ on uint_ct's does not return a normalized value. Suppose we apply it to compute a_1 + a_2, where a_1, a_2 are two uint_ct's both having zero additive constants. Abusing notation to conflate a uint_ct with its value, the constraint imposed by operator+ is: a_1 + a_2 = w + 2^width · d, and nothing more. That is, the result is only weakly normalized. Without appropriately range constraining w, it is not known that w is the remainder of division of a_1 + a_2 by 2^width.
Suppose we know ahead of time that we actually want to compute a_1 + a_2 + a_3, with a_3 also having zero additive constant. Computing this sum as (a_1 + a_2) + a_3, the result is weakly normalized, backed by a constraint a_1 + a_2 + a_3 = w' + 2^width · d'. Now suppose that result is normalized. Altogether, we have a_1 + a_2 + a_3 = w' + 2^width · d' and w' ∈ [0, 2^width). This shows that we can defer range constraints and correctly compute uint_ct additions.
This, of course, has the tradeoff that the circuit writer must take care to manually impose range constraints when they are needed.
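The deferral can be modeled outside the circuit as follows (a stand-in sketch of the arithmetic, not the constraint system itself):

```python
# Model of deferred normalization for uint addition: accumulate a multi-term
# sum first, then perform a single division by 2^width, instead of reducing
# after every operator+. Illustrative stand-in for the circuit constraints.
WIDTH = 32

def normalize(unreduced: int):
    # the single division constraint: unreduced = w + 2^width * d,
    # with w subsequently range-constrained to [0, 2^width)
    d, w = divmod(unreduced, 1 << WIDTH)
    assert 0 <= w < (1 << WIDTH)   # the (deferred) range constraint
    return w, d

a1, a2, a3 = 0xFFFFFFF0, 0x20, 0x5
w, d = normalize(a1 + a2 + a3)     # a1 + a2 stays weakly normalized in between
assert w == (a1 + a2 + a3) % (1 << WIDTH)
assert d == 1                      # exactly one overflow occurred in this sum
```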
Descriptions of algorithms
Extensive comments were added to the code to document complicated formulas and describe our algorithms. Some of the logic has been delegated to the widgets, having been 'absorbed', so to speak, into the protocol definition itself. In particular, create_balanced_add_gate imposes an addition constraint and a range constraint, and this is described in the turbo arithmetic widget. Similarly, create_big_add_gate_with_bit_extraction extracts bit information from a scalar represented in terms of two-bit 'quads'. The audit added documentation around these TurboPLONK gates and widgets.
A reader coming to the task of understanding this code with little or no preparation is advised to begin by reading the function TurboComposer::decompose_into_base4_accumulators. This is the TurboPLONK function that imposes a range constraint by building a base-4 expansion of a given witness, recording this information in a vector of witnesses that accumulate to the given input (in the case of a correct proof). The decomposition there is used repeatedly for operations on uints (e.g., bit shifting).
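A sketch of the accumulator construction, under the assumption that quads are folded in from the most significant end (the ordering convention is illustrative; see the function itself for the authoritative one):

```python
# Sketch of a base-4 (quad) decomposition with running accumulators, in the
# spirit of decompose_into_base4_accumulators: each accumulator folds in the
# next most-significant quad, and the final accumulator reproduces the input,
# which is what range-constrains the input to num_bits bits.
def decompose_into_base4_accumulators(x: int, num_bits: int):
    assert num_bits % 2 == 0 and 0 <= x < (1 << num_bits)
    # quads of x, most significant first
    quads = [(x >> i) & 3 for i in range(num_bits - 2, -2, -2)]
    accumulators, acc = [], 0
    for q in quads:
        acc = 4 * acc + q          # fold in the next quad
        accumulators.append(acc)
    return quads, accumulators

quads, accs = decompose_into_base4_accumulators(0b11010010, 8)
assert quads == [3, 1, 0, 2]       # 11, 01, 00, 10
assert accs[-1] == 0b11010010      # the accumulation reproduces the input
```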
Code Paths
There is branching in operator>, where the conditions for > and <= are unified. This affects all of the other comparisons, which are implemented in terms of >.
Otherwise, the code avoids branching as much as possible. Some circuit construction algorithms divide into cases (e.g., whether a bit shift is by an even or an odd amount), but the predicates in those cases are known at compile time, not just at proving time.
Notes and Nullifiers
Global Constants
See the constants.hpp files for constants.
Pedersen background
A note on Pedersen hashing:
- pedersen::commit returns a point.
- pedersen::compress returns the x-coordinate of pedersen::commit.
A different generator is used for each type of note and nullifier (including different generators for partial vs complete commitments). See the hackmd https://hackmd.io/gRsmqUGkSDOCI9O22qWXBA?view for a detailed description of pedersen hashing using turbo plonk.
Note: pedersen::compress is collision resistant (see the large comment above the hash_single function in the codebase, and the hackmd https://hackmd.io/urZOnB1gQimMqsMdf7ZBvw for a formal proof), so it can be used in place of pedersen::commit for note commitments & nullifiers.
Notes and Commitments
Account note
An Account Note associates a spending key with an account. It consists of the following field elements. See the dedicated account_circuit.md for more details.
- alias_hash: the 224-bit alias_hash
- account_public_key.x: the x-coordinate of the account public key
- spending_public_key.x: the x-coordinate of the spending key that has been assigned to this account via this note.
An account note commitment is:
pedersen::compress(alias_hash, account_public_key.x, signing_pub_key.x)
- Pedersen GeneratorIndex: ACCOUNT_NOTE_COMMITMENT
Value note
Consists of the following:
- secret: a random value to hide the contents of the commitment.
- owner.x and owner.y: the public key of the owner of the value note. This is a Grumpkin point.
- account_required: whether the note is linked to an existing account, or can be spent without an account by directly signing with the owner key.
- creator_pubkey: Optional. Allows the sender of a value note to inform the recipient who the note came from.
- value: the value contained in this note.
- asset_id: unique identifier for the 'currency' of this note. The RollupProcessor.sol maps asset_ids to either ETH or the address of some ERC-20 contract.
- input_nullifier: In order to create a value note, another value note must be nullified (except when depositing, where a 'gibberish' nullifier is generated). We include the input_nullifier here to ensure the commitment is unique (which, in turn, will ensure this note's nullifier will be unique).
partial commitment
pedersen::compress(secret, owner.x, owner.y, account_required, creator_pubkey)
- Pedersen GeneratorIndex: VALUE_NOTE_PARTIAL_COMMITMENT
- creator_pubkey can be zero.
Note: the secret is used to construct a hiding Pedersen commitment that hides the note details.
complete commitment
pedersen::compress(value_note_partial_commitment, value, asset_id, input_nullifier)
- Pedersen GeneratorIndex: VALUE_NOTE_COMMITMENT
- value and asset_id can be zero
In other words, the complete commitment combines the partial commitment with the remaining note fields. (The generator indexing is just for illustration. Consult the code.)
Claim note
Claim notes are created to document the amount a user deposited in the first stage of a defi interaction. Whatever the output token values of the defi interaction, the data in the claim note will allow the correct share to be apportioned to the user. See the claim circuit doc for more details.
Consists of the following:
- deposit_value: The value that the user deposited in the first stage of their defi interaction.
- bridge_call_data: Contains an encoding of the bridge being interacted with.
- value_note_partial_commitment: See the above 'value note' section.
- input_nullifier: In order to create a claim note, a value note must be nullified as part of the 'defi deposit' join-split transaction. We include that input_nullifier here to ensure the claim commitment is unique (which, in turn, will ensure this note's nullifier will be unique).
- defi_interaction_nonce: A unique identifier for a particular defi interaction that took place. This is assigned by the RollupProcessor.sol contract, and emitted as an event.
- fee: The fee to be paid to the rollup processor, specified as part of the defi deposit join-split tx. Half gets paid to process the defi deposit tx, and half to process the later claim tx.
partial commitment
pedersen::compress(deposit_value, bridge_call_data, value_note_partial_commitment, input_nullifier)
- Pedersen GeneratorIndex: CLAIM_NOTE_PARTIAL_COMMITMENT
- bridge_call_data can be zero.
complete commitment
pedersen::compress(claim_note_partial_commitment, defi_interaction_nonce, fee)
- Pedersen GeneratorIndex: CLAIM_NOTE_COMMITMENT
- fee and defi_interaction_nonce could be zero.
Defi Interaction note
A defi interaction note records the details of a particular defi interaction. It records the total deposited by all users and the totals output by the defi bridge. These totals get apportioned to each user based on the contents of each user's claim note.
Consists of the following:
- bridge_call_data: Contains an encoding of the bridge that was interacted with.
- total_input_value: The total deposited to the bridge by all users who took part in this defi interaction.
- total_output_value_a: The sum returned by the defi bridge denominated in 'token A'. (The details of 'token A' are contained in the bridge_call_data).
- total_output_value_b: The sum returned by the defi bridge denominated in 'token B'. (The details of 'token B' are contained in the bridge_call_data).
- interaction_nonce: (a.k.a. defi interaction nonce) A unique identifier for a particular defi interaction that took place. This is assigned by the RollupProcessor.sol contract, and emitted as an event.
- interaction_result: true/false - was the L1 transaction a success?
commitment
pedersen::compress(bridge_call_data, total_input_value, total_output_value_a, total_output_value_b, interaction_nonce, interaction_result)
- Pedersen GeneratorIndex: DEFI_INTERACTION_NOTE_COMMITMENT
Note encryption and decryption
Details on this are found here
Nullifiers
Value note nullifier
Objectives of this nullifier:
- Only the owner of a note may be able to produce the note's nullifier.
- No collisions. Each nullifier can only be produced for one value note commitment. Duplicate nullifiers must not be derivable from different note commitments.
- No collisions between nullifiers of other notes (i.e. claim notes or defi interaction notes).
- No double-spending. Each commitment must have one, and only one, nullifier.
- The nullifier must only be accepted and added to the nullifier tree if it is the output of a join-split circuit which 'spends' the corresponding note.
Calculation We set out the computation steps below, with suggestions for changes:
- hashed_pk = account_private_key * G (where G is a generator unique to this operation).
  - This hashed_pk is useful to demonstrate to a 3rd party that you've nullified something without having to provide your secret key.
- compressed_inputs = pedersen::compress(value_note_commitment, hashed_pk.x, hashed_pk.y, is_real_note)
  - This compression step reduces the cost (constraint-wise) of the blake2s hash which is done next.
- nullifier = blake2s(compressed_inputs);
  - blake2s is needed, because a pedersen commitment alone can leak data (see comment in the code for more details on this).
Pedersen GeneratorIndex:
- JOIN_SPLIT_NULLIFIER_ACCOUNT_PRIVATE_KEY for the hashed_pk
- JOIN_SPLIT_NULLIFIER to compress the inputs
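The dataflow of this calculation can be sketched as follows; SHA-256 stands in for the Pedersen compression and hashed_pk is treated as opaque bytes (both illustrative assumptions), while blake2s is the real final hash named by the spec:

```python
import hashlib

# Sketch of the value-note nullifier pipeline's shape: compress, then blake2s.
# SHA-256 stands in for pedersen::compress, and hashed_pk is modeled as an
# opaque byte string; only the dataflow mirrors the spec.
def pedersen_compress_stand_in(*fields: bytes) -> bytes:
    return hashlib.sha256(b"".join(fields)).digest()   # NOT the real Pedersen

def value_note_nullifier(value_note_commitment: bytes,
                         hashed_pk: bytes,
                         is_real_note: bool) -> bytes:
    compressed = pedersen_compress_stand_in(
        value_note_commitment, hashed_pk, bytes([is_real_note]))
    return hashlib.blake2s(compressed).digest()        # blake2s, as in the spec

n1 = value_note_nullifier(b"\x01" * 32, b"\x02" * 64, True)
n2 = value_note_nullifier(b"\x03" * 32, b"\x02" * 64, True)
assert n1 != n2           # distinct commitments give distinct nullifiers
assert len(n1) == 32      # blake2s output is 32 bytes
```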
Claim note nullifier
Objectives of this nullifier:
- Anyone (notably the rollup provider) may be able to produce this nullifier.
- No collisions. Each nullifier can only be produced for one claim note commitment. Duplicate nullifiers must not be derivable from different claim note commitments.
- No collisions between nullifiers of other notes (i.e. value notes or defi interaction notes).
- This nullifier must only be added to the nullifier tree if it is the output of a claim circuit which 'spends' the corresponding claim note.
- No double-spending. Each claim note commitment must have one, and only one, nullifier.
Calculation
nullifier = pedersen::compress(claim_note_commitment);
- Note: it is ok that observers can see which claim note is being nullified, since values in a defi interaction are public (only owners are private). Furthermore, the rollup provider needs to be able to generate the claim proof and doesn't have access to any user secrets - so this nullifier allows this use case.
- Pedersen GeneratorIndex: CLAIM_NOTE_NULLIFIER
Defi Interaction nullifier
Objectives of this nullifier:
- This is not a 'conventional' nullifier, in the sense that it doesn't prevent others from 'referring' to the defi interaction note. It's really only needed so that something unique may be fed into the output_note_2 output of the claim circuit.
- Anyone (notably the rollup provider) may be able to produce a valid nullifier on behalf of any user who partook in the corresponding defi interaction.
- No collisions between nullifiers of other notes (i.e. value notes or claim notes).
- This nullifier must only be added to the nullifier tree if it is the output of a claim circuit which 'refers' to the corresponding defi interaction note and 'spends' a claim note which was created during that defi interaction.
Calculation:
nullifier = pedersen::compress(defi_interaction_note_commitment, claim_note_commitment);
- Pedersen GeneratorIndex: DEFI_INTERACTION_NULLIFIER
Defi Bridge Contract Interface
Types
library AztecTypes {
enum AztecAssetType {
NOT_USED,
ETH,
ERC20,
VIRTUAL
}
struct AztecAsset {
uint256 id;
address erc20Address;
AztecAssetType assetType;
}
}
The AztecAsset struct is an attempt at a more developer-friendly description of an Aztec asset that does not rely on bit flags.
The type of the asset is described by an enum. For virtual or not used assets, the erc20Address will be 0.
For input virtual assets, the id field will contain the interaction nonce of the interaction that created the asset.
For output virtual assets, the id field will be the current interaction nonce.
External Methods
convert
Initiate a DeFi interaction and inform the rollup contract of the proceeds. If the DeFi interaction cannot proceed for any reason, it is expected that the convert method will throw.
function convert(
AztecTypes.AztecAsset memory inputAssetA,
AztecTypes.AztecAsset memory inputAssetB,
AztecTypes.AztecAsset memory outputAssetA,
AztecTypes.AztecAsset memory outputAssetB,
uint256 totalInputValue,
uint256 interactionNonce,
uint64 auxData
)
external
payable
override
returns (
uint256 outputValueA,
uint256 outputValueB,
bool _isAsync
)
Input Values:
Name | Type | Description |
---|---|---|
inputAssetA | AztecAsset | first input asset |
inputAssetB | AztecAsset | second input asset. Either VIRTUAL or NOT_USED |
outputAssetA | AztecAsset | first output asset. Cannot be virtual |
outputAssetB | AztecAsset | second output asset. Can be real or virtual (or NOT_USED) |
totalInputValue | uint256 | The amount of inputAsset this bridge contract is allowed to transfer from the rollup contract. |
interactionNonce | uint256 | The current defi interaction nonce |
auxData | uint64 | Custom auxiliary metadata |
Return Values:
Name | Type | Description |
---|---|---|
outputValueA | uint256 | The amount of outputAssetA the rollup contract will be able to transfer from this bridge contract. Must be greater than 0 if numOutputAssets is 1. |
outputValueB | uint256 | The amount of outputAssetB the rollup contract will be able to transfer from this bridge contract. Must be 0 if numOutputAssets is 1. |
In the unfortunate event that both output values are zero, this function should throw so that the rollup contract can refund inputValue back to the users.
BridgeCallData is a 250-bit concatenation of the following data (starting at the most significant bit position):
bit position | bit length | definition | description |
---|---|---|---|
0 | 64 | auxData | custom auxiliary data for bridge-specific logic |
64 | 32 | bitConfig | flags that describe asset types |
96 | 32 | openingNonce | (optional) reference to a previous defi interaction nonce (used for virtual assets) |
128 | 30 | outputAssetB | asset id of 2nd output asset |
158 | 30 | outputAssetA | asset id of 1st output asset |
188 | 30 | inputAsset | asset id of 1st input asset |
218 | 32 | bridgeAddressId | id of bridge smart contract address |
Bit Config Definition

bit | meaning |
---|---|
0 | firstInputVirtual |
1 | secondInputVirtual |
2 | firstOutputVirtual |
3 | secondOutputVirtual |
4 | secondInputReal |
5 | secondOutputReal |
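The 250-bit packing in the table above can be sketched as follows. This is an illustrative encoder/decoder derived directly from the table's field widths, not the production implementation:

```python
# Widths from the BridgeCallData table: auxData occupies the most
# significant 64 bits, bridgeAddressId the least significant 32 bits.
FIELD_WIDTHS = [
    ("auxData", 64),
    ("bitConfig", 32),
    ("openingNonce", 32),
    ("outputAssetB", 30),
    ("outputAssetA", 30),
    ("inputAsset", 30),
    ("bridgeAddressId", 32),
]  # total: 250 bits

def encode_bridge_call_data(fields: dict) -> int:
    """Pack the seven fields into a single 250-bit integer."""
    acc = 0
    for name, width in FIELD_WIDTHS:
        value = fields[name]
        assert 0 <= value < (1 << width), f"{name} out of range"
        acc = (acc << width) | value
    return acc

def decode_bridge_call_data(data: int) -> dict:
    """Unpack a 250-bit integer back into its seven fields."""
    fields = {}
    for name, width in reversed(FIELD_WIDTHS):
        fields[name] = data & ((1 << width) - 1)
        data >>= width
    return fields
```

Round-tripping `decode(encode(x)) == x` is a handy sanity check when debugging a bridge call data value.
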
Account Circuit
Background
Aztec accounts are different from Ethereum addresses, mainly because deriving an Ethereum address is expensive (constraint-wise) within a circuit. Also, Aztec accounts have several extra features:
- A human-readable name (an `alias`) can be associated with an account public key.
- Multiple (unlimited) spending keys (a.k.a. signing keys) can be associated with an `alias` and its `account_public_key`, to enable users to more easily spend from multiple devices (for example).
- Spending keys can also be used for account recovery (e.g. with the aid of a 3rd party).
- If the account private key is compromised, a user can migrate to a new `account_public_key`. (They would also need to transfer all of their existing value notes to be owned by this new `account_public_key`.)
- If a spending private key is compromised, a user can also migrate to a new `account_public_key`, and a brand new set of spending keys can be associated with this new `account_public_key`. (They would also need to transfer all of their existing value notes to be owned by this new `account_public_key`.)
Keys:
- Spending/signing keys are used to spend value notes.
- Account keys are used to decrypt encrypted value note data.
- Also, initially (before any alias or signing keys are linked to the account), the 0th account key serves as a spending key for a user's value notes. Thereafter only spending keys can be used to spend notes.
See the diagram (below) for derivations of the various keys.
Keys
Key Name | Derivation |
---|---|
eth_private_key | Random 256 bits |
eth_public_key | eth_private_key * secp256k1.generator |
eth_address | The right-most 160-bits of keccak256(eth_public_key) |
account_private_key | The first 32-bytes of the signature:eth_sign("\x19Ethereum Signed Message:\n" + len(message) + message, eth_address) *where message = "Sign this message to generate your Aztec Privacy Key. This key lets the application decrypt your balance on Aztec.\n\nIMPORTANT: Only sign this message if you trust the application." *using a client which has access to your eth_address 's private key, for signing. |
account_public_key | account_private_key * grumpkin.generator |
spending_private_key a.k.a. signing_private_key | Random 256 bits |
spending_public_key a.k.a. signing_public_key | spending_private_key * grumpkin.generator |
Account Glossary
Name | Definition | Description |
---|---|---|
account | An account is generally used to mean an (alias_hash, account_public_key) pair. | |
alias | E.g. alice | Some unique human-readable string. |
alias_hash | The first 28-bytes of blake2s_to_field(alias) .QUESTION: how does the output of blake2s get mapped to a field element? | A constant-sized representation of an alias , for use in circuits. |
account_note | { | Links together a user's alias_hash , their account_public_key and one of their spending_public_key s.A user can register multiple account notes as a way of registering multiple spending_public_keys against their account. They might, for example, want to be able to spend from different devices without needing to share keys between them.A user can also create a new account note as a way of registering a new account_public_key against their alias_hash . Ideally, a user would use just one account_public_key at a time (and transfer all value notes to be owned by that account_public_key ), but this is not enforced by the protocol. |
account_note.commitment | pedersen::compress( | Each account note commitment is stored in the data tree, so that our circuits can check whether spending and account keys have been correctly registered and actually belong to the user executing the transaction. |
alias_hash_nullifier | pedersen::compress( | This nullifier is added to the nullifier tree (by the rollup circuit) when executing the account circuit in create mode. It prevents an alias from ever being registered again by another user. |
account_public_key_nullifier | pedersen::compress( | This nullifier is added to the nullifier tree (by the rollup circuit) when executing the account circuit in create or migrate modes. It prevents an account_public_key from ever being registered again by another user. |
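As a sketch of the `alias_hash` derivation from the glossary above, assuming blake2s is applied to the UTF-8 bytes of the alias (the exact field-mapping question noted in the table remains open; this only shows the truncation):

```python
import hashlib

def alias_hash(alias: str) -> bytes:
    # First 28 bytes (224 bits) of blake2s(alias). 224 bits sits
    # comfortably below the ~254-bit BN254 field modulus, so the
    # truncated digest can be read directly as a field element.
    return hashlib.blake2s(alias.encode("utf-8")).digest()[:28]
```
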
Modes: create, update, migrate
The account circuit can be executed in one of three 'modes':
- Create
  - Used to register a new `alias`.
  - A new 'account' is registered by generating nullifiers for a new `alias_hash` and a new `account_public_key`. This ensures the `alias_hash` and `account_public_key` haven't already been registered by someone else.
  - Two new `account_notes` may be created, as a way of registering the first two new `spending_public_keys` against the new account.
  - The circuit enforces that the caller knows the private key of `account_public_key`, by checking that a signature over the circuit's inputs has been signed by the `account_private_key`. We need to do this, in part, because the owner of this `account_public_key` might already have been sent value notes, even before registering it with Aztec Connect.
  - Note: there are no protocol checks to ensure these new `spending_public_keys` (which are added to `account_notes`) are new or unique.
  - Note: there are no protocol checks during `create` to ensure the user knows the private keys to these `spending_public_keys`.
- Update
  - Used to add additional spending keys to an account.
  - Every account tx in `update` mode adds up to two new spending keys to an account.
  - Two new `account_notes` are created, as a way of registering the two new `spending_public_keys` against the account.
  - No nullifiers are produced.
  - The circuit enforces that the caller knows the private key of an existing `signing_public_key` for this account, by:
    - checking that a signature over the circuit's inputs has been signed by a `signing_private_key`; and
    - checking that this `signing_public_key` is contained in an `account_note`'s commitment and that this commitment exists in the data tree.
  - Note: there are no protocol checks during `update` to ensure the user knows the private key to the `account_public_key`.
- Migrate
  - Used to update a user's `account_public_key` without changing their `alias`.
  - The new 'account' is registered by generating a nullifier for the new `account_public_key`.
  - Two new `account_notes` may be created, as a way of registering the first two new `spending_public_keys` against this new account.
  - The circuit enforces that the caller knows the private key of an existing `signing_public_key` for this account, by:
    - checking that a signature over the circuit's inputs has been signed by a `signing_private_key`; and
    - checking that this `signing_public_key` is contained in an `account_note`'s commitment and that this commitment exists in the data tree.
  - Note: there are no protocol checks during `migrate` to ensure the user knows the private key to the `account_public_key`.
When to migrate?
If a user, Alice, suspects their `account_private_key` or `spending_private_key` has been compromised, then they should run the account circuit in `migrate` mode. As already stated, this will associate a new `account_public_key` with their `alias` and allow them to register new `spending_public_keys` against this new `account_public_key`. Two new account notes get created by the account circuit in `migrate` mode.

HOWEVER, the previous 'old' account notes (containing the 'old' compromised key(s)) DO NOT get nullified. They remain forever 'valid' notes in the data tree. Therefore, if Alice still owns value notes which are owned by one of her old `account_public_keys`, an attacker (who somehow knows the old account private key and a corresponding old `spending_private_key`) would still be able to spend such value notes. Therefore, after migrating their account, a user MUST ALSO transfer all of their existing notes to be owned by their new `account_public_key`.
Example of account circuit modes
Each row of the table shows the data created by one execution of the account circuit. Rows are chronologically ordered.
Mode | alias | alias_hash | account public key | new spending keys | signer | new alias_hash_ nullifier emitted | new account_ public_key_ nullifier emitted | new account note commitments |
---|---|---|---|---|---|---|---|---|
create | alice | h(alice) | apk_1 | spk_1a, spk_1b | apk_1 | h(h(alice)) | h(apk_1.x, apk_1.y) | h(h(alice), apk_1, spk_1a) h(h(alice), apk_1, spk_1b) |
update | alice | h(alice) | apk_1 | spk_1c, spk_1d | spk_1b (e.g.) | - | - | h(h(alice), apk_1, spk_1c) h(h(alice), apk_1, spk_1d) |
update | alice | h(alice) | apk_1 | spk_1e, spk_1f | spk_1a (e.g.) | - | - | h(h(alice), apk_1, spk_1e) h(h(alice), apk_1, spk_1f) |
migrate | alice | h(alice) | apk_2 | spk_2a, spk_2b | spk_1d (e.g.) | - | h(apk_2.x, apk_2.y) | h(h(alice), apk_2, spk_2a) h(h(alice), apk_2, spk_2b) |
update | alice | h(alice) | apk_2 | spk_2c, spk_2d | spk_2b (e.g.) | - | - | h(h(alice), apk_2, spk_2c) h(h(alice), apk_2, spk_2d) |
Note: `h` is lazy notation, used interchangeably in this table for different hashes. Consult the earlier tables or the below pseudocode for clarity on which hashes specifically are used.

Note: after an account `migrate`, all previous value notes should be transferred (via the join-split circuit) to be owned by the new account public key.
More on Nullifiers
Unlike the join-split circuit (for example), which always produces nullifiers, the account circuit only conditionally produces nullifiers (see the different modes above). It's possible for `nullifier_1` or `nullifier_2` to be `0`:

- `nullifier_1 = create ? pedersen::compress(account_alias_hash) : 0;`
- `nullifier_2 = (create || migrate) ? pedersen::compress(account_public_key) : 0;`

Note: The rollup circuit for Aztec Connect permits unlimited `0` nullifiers to be added to the nullifier tree, because:

- Each nullifier is added to the nullifier tree at the leafIndex which is equal to the nullifier value.
- So the rollup circuit will try to add `nullifier = 0` to `leafIndex = 0`.
  - First it checks whether the leaf is empty. Since `0` implies "empty", this check will pass, and the value `0` will be once again added to the 0th leaf.
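The "zero nullifiers are harmless" argument can be illustrated with a toy model of the nullifier tree, using a dict as a stand-in for the sparse Merkle tree:

```python
# Toy nullifier tree: maps leafIndex -> leaf value. An absent key (or a
# stored 0) means the leaf is empty. A nullifier is written to the leaf
# whose index equals the nullifier's own value.

def insert_nullifier(tree: dict, nullifier: int) -> bool:
    if tree.get(nullifier, 0) != 0:
        return False                # leaf occupied: double-spend rejected
    tree[nullifier] = nullifier     # leaf value becomes the nullifier
    return True

tree = {}
assert insert_nullifier(tree, 0)    # writes 0 to leaf 0...
assert insert_nullifier(tree, 0)    # ...which still reads as "empty", so
                                    # zero nullifiers insert repeatedly
assert insert_nullifier(tree, 42)
assert not insert_nullifier(tree, 42)  # a real nullifier cannot repeat
```
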
Diagram
Here's a detailed diagram of how all of Aztec's different keypairs are derived, and the flow of account creation and migration. (Edits are welcome - let Mike know if the link doesn't work).
The circuit
Account Circuit: Worked Example
There's a little diagram at the diagrams link too.
- Alice generates a grumpkin key pair `(account_private_key, account_public_key)`.
- Alice can receive funds prior to registering an `alias`, at `account_public_key`.
  - I.e. a sender can send Alice funds by creating a value note with preimage values:
    - `owner = account_public_key`
    - `requires_account = false`
- Alice can register the alias `alice` against her `account_public_key` using the account circuit.
  - The `alias_hash = hash('alice')` gets nullified, effectively 'reserving' the alias `alice` to prevent anyone else using it.
  - The `account_public_key` gets nullified, to prevent anyone else using it.
  - Alice's `new_account_public_key`, her `alias_hash`, and two new spending keys are all linked together via two new account notes which get added to the data tree.
- Alice must then transfer any previously-received funds that were sent to `account_public_key` (i.e. value notes where `requires_account = false`) to value notes whose preimage values contain `(account_public_key, requires_account = true)`.
- Alice can register unlimited additional spending keys to `(alice, account_public_key)`, via additional calls to the account circuit (in `update` mode).
- If a `spending_public_key` becomes compromised, Alice must do the following:
  - Generate a new account note with a `new_account_public_key` and her existing `alice` alias (using the `migrate` flow). The new account note's spending keys SHOULD be different from the compromised key (although there are no protocol checks to enforce this).
  - Use the account `update` flow to assign additional non-compromised spending keys to her new account note.
  - Transfer funds assigned to `(account_public_key, alice)` and send them to `(new_account_public_key, alice)`.
- Similarly, if Alice's `account_private_key` becomes compromised, she can use the account circuit to migrate to a new `account_public_key`.
Circuit Inputs: Summary
The inputs for the account circuit are:
As previously, the inputs are elements of the BN254 scalar field (see the BN254 specification).
Public Inputs: Detail
Recall that all inner circuits must have the same number of public inputs, as they will be used homogeneously by the rollup circuit. Hence, most of the account circuit's public inputs are 0, because they're not actually needed for the account circuit's functionality.
- `proof_id = PublicInputs::ACCOUNT` (i.e. this is effectively a witness which can only take one valid value).
- `output_note_commitment_1`
- `output_note_commitment_2`
- `nullifier_1`
- `nullifier_2`
- `public_value = 0`
- `public_owner = 0`
- `asset_id = 0`
- `data_tree_root`
- `tx_fee = 0`
- `tx_fee_asset_id = 0`
- `bridge_call_data = 0`
- `defi_deposit_value = 0`
- `defi_root = 0`
- `backward_link = 0`
- `allow_chain = 0`
Private Inputs: Detail
- `account_public_key`
- `new_account_public_key`
- `signing_public_key`
- `signature`
- `new_signing_public_key_1`
- `new_signing_public_key_2`
- `alias_hash = blake2s(alias).slice(0, 28)`
- `account_nonce`
- `create` (bool)
- `migrate` (bool)
- `account_note_index`
- `account_note_path`
Circuit Logic (Pseudocode)
Computed vars:

- `signer` = `signing_public_key`
- `message` = `pedersen::compress(alias_hash, account_public_key.x, new_account_public_key.x, spending_public_key_1.x, spending_public_key_2.x, nullifier_1, nullifier_2)`
- `account_note_commitment` = `pedersen::compress(alias_hash, account_public_key.x, signer.x)`

Computed public inputs:

- `output_note_commitment_1` = `pedersen::compress(alias_hash, new_account_public_key.x, spending_public_key_1.x)`
- `output_note_commitment_2` = `pedersen::compress(alias_hash, new_account_public_key.x, spending_public_key_2.x)`
- `nullifier_1` = `create ? pedersen::compress(alias_hash) : 0;`
- `nullifier_2` = `(create || migrate) ? pedersen::compress(new_account_public_key) : 0;`

Circuit constraints:

- `create == 1 || create == 0`
- `migrate == 1 || migrate == 0`
- `require(!(create && migrate))`
- `require(new_account_public_key != spending_public_key_1)`
- `require(new_account_public_key != spending_public_key_2)`
- `if (migrate == 0) { require(account_public_key == new_account_public_key) }`
- `verify_signature(message, signer, signature) == true`
- `if (create == false) { require(membership_check(account_note_data, account_note_index, account_note_path, data_tree_root) == true) }`
- Assert all 'zeroed' public inputs are indeed zero.
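The mode flags and conditional nullifiers above can be condensed into a small executable sketch. Stand-in integers replace the `pedersen::compress` outputs; this mirrors the pseudocode, it is not the circuit itself:

```python
def account_circuit_nullifiers(create: bool, migrate: bool,
                               alias_hash_nullifier: int,
                               account_pk_nullifier: int):
    # create and migrate are booleans and mutually exclusive.
    assert not (create and migrate)
    # nullifier_1 = create ? pedersen::compress(alias_hash) : 0
    nullifier_1 = alias_hash_nullifier if create else 0
    # nullifier_2 = (create || migrate)
    #                 ? pedersen::compress(new_account_public_key) : 0
    nullifier_2 = account_pk_nullifier if (create or migrate) else 0
    return nullifier_1, nullifier_2

assert account_circuit_nullifiers(True, False, 11, 22) == (11, 22)  # create
assert account_circuit_nullifiers(False, False, 11, 22) == (0, 0)   # update
assert account_circuit_nullifiers(False, True, 11, 22) == (0, 22)   # migrate
```

Note how `update` mode emits only zero nullifiers, which, per the previous section, the rollup circuit happily re-inserts at leaf 0.
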
JoinSplit Circuit
Circuit Description
This circuit allows notes to be spent.
The circuit takes in two input notes, and two new output notes, and updates the Note Tree and Nullifier Tree accordingly.
Circuit Inputs: Summary
The inputs for the join-split circuit are all elements of the BN254 scalar field (see the BN254 specification).
Public Inputs: Detail
- `proof_id`
- `output_note_commitment_1`
- `output_note_commitment_2`
- `nullifier_1`
- `nullifier_2`
- `public_value`
- `public_owner`
- `public_asset_id`
- `old_data_tree_root`
- `tx_fee`
- `tx_fee_asset_id`
- `bridge_call_data`
- `defi_deposit_value`
- `defi_root` // Note: this will not be used by the circuit, but is included so that the number of public inputs is uniform across base-level circuits.
- `backward_link`
- `allow_chain`
Private Inputs: Detail
{
asset_id,
num_input_notes,
input_note_1_index,
input_note_2_index,
input_note_1_path,
input_note_2_path,
input_note_1: {
value,
secret,
owner,
asset_id,
account_required,
creator_pk,
input_nullifier,
},
input_note_2: {
value,
secret,
owner,
asset_id,
account_required,
creator_pk,
input_nullifier,
},
output_note_1: {
value,
secret,
owner,
asset_id,
account_required,
creator_pk, // (creator_pk = optional public key of note creator)
input_nullifier,
},
output_note_2: {
value,
secret,
owner,
asset_id,
account_required,
creator_pk, // (creator_pk = optional public key of note creator)
input_nullifier,
},
partial_claim_note_data: {
deposit_value,
bridge_call_data_local: {
bridge_address_id,
input_asset_id_a,
input_asset_id_b,
output_asset_id_a,
output_asset_id_b,
config: {
second_input_in_use,
second_output_in_use,
},
aux_data,
},
note_secret,
input_nullifier,
},
account_private_key,
alias_hash,
account_required,
account_note_index,
account_note_path,
signing_pk, // (a.k.a. spending public key)
signature,
}
Index of Functions
In the Pseudocode to follow, we use the following function names. See notes & nullifiers for more details.
- `public_key()` - derives a public key from a given secret key.
- `value_note_commit()` - Value note commitment function, which is assumed to be:
  - Collision-resistant
  - Field-friendly, which means the output value only depends on the inputs as field elements, and doesn't change e.g. when an input changes from a to a+r as a bit string.
- `partial_value_note_commit()` - Partial value note commitment function. Has the same assumptions as `value_note_commit`. Uses a different generator. Stresses that the data being committed to is partial - a subset of the data committed to by `value_note_commit`.
- `partial_claim_note_commit()` - Partial claim note commitment function. Has the same assumptions as `value_note_commit`. Uses a different generator. Stresses that the data being committed to is partial - a subset of the data committed to by `claim_note_commit` (in the claim circuit).
- `account_note_commit()` - Account note commitment function, which is assumed to be collision-resistant.
- `compute_nullifier()` - Nullifier function, which we assume can be modeled as a random oracle, and only depends on `account_private_key`.
Circuit Logic (Pseudocode)
// range checks:
for i = 1,2:
{
check:
input_note_i_index < 2 ** DATA_TREE_DEPTH
input_note_i.value < 2 ** NOTE_VALUE_BIT_LENGTH
output_note_i.value < 2 ** NOTE_VALUE_BIT_LENGTH
}
partial_claim_note.deposit_value < 2 ** DEFI_DEPOSIT_VALUE_BIT_LENGTH
asset_id < 2 ** MAX_NUM_ASSETS_BIT_LENGTH
public_value < 2 ** NOTE_VALUE_BIT_LENGTH
tx_fee < 2 ** TX_FEE_BIT_LENGTH
account_note_index < 2 ** DATA_TREE_DEPTH
alias_hash < 2 ** ALIAS_HASH_BIT_LENGTH
account_required < 2
num_input_notes in {0, 1, 2}
allow_chain in {0, 1, 2, 3}
// tx type initialisations:
const is_deposit = proof_id == DEPOSIT
const is_withdraw = proof_id == WITHDRAW
const is_send = proof_id == SEND
const is_defi_deposit = proof_id == DEFI_DEPOSIT
const is_public_tx = is_deposit || is_withdraw
// public value initialisations
const public_asset_id = is_public_tx ? asset_id : 0;
const public_input = is_deposit ? public_value : 0;
const public_output = is_withdraw ? public_value : 0;
// account initialisations
const account_pk = public_key(account_private_key);
const signer_pk = account_required ? signing_pk.x : account_pk.x;
const account_note = {
alias_hash,
account_pk,
signer_pk,
};
const account_note_commitment = account_note_commit(account_note);
// commitments
for i in 1,2
{
input_note_i.commitment = value_note_commit(input_note_i);
output_note_i.commitment = value_note_commit(output_note_i);
}
// Data validity checks:
require(num_input_notes = 0 || 1 || 2); // it's pseudocode!
require(is_deposit || is_send || is_withdraw || is_defi_deposit);
if(num_input_notes == 0) require(is_deposit);
if (is_public_tx) {
require(public_value > 0);
require(public_owner > 0);
} else {
require(public_value == 0);
require(public_owner == 0);
}
require(input_note_1.commitment != input_note_2.commitment);
require(
  (asset_id == input_note_1.asset_id) &&
  (asset_id == output_note_1.asset_id) &&
  (asset_id == output_note_2.asset_id)
);
if (
(num_input_notes == 2) &&
!is_defi_deposit
) {
require(input_note_1.asset_id == input_note_2.asset_id);
}
require(account_private_key != 0);
const account_public_key = public_key(account_private_key);
require(
account_public_key == input_note_1.owner &&
account_public_key == input_note_2.owner
);
require(
account_required == input_note_1.account_required &&
account_required == input_note_2.account_required
);
if (output_note_1.creator_pubkey) {
require(account_public_key == output_note_1.creator_pubkey);
}
if (output_note_2.creator_pubkey) {
require(account_public_key == output_note_2.creator_pubkey);
}
// Defi deposit
let output_note_1_commitment = output_note_1.commitment; // supersedes output_note_1.commitment from here on in.
let input_note_2_value = input_note_2.value; // supersedes input_note_2.value from here on in.
let output_note_1_value = output_note_1.value;
let defi_deposit_value = 0;
if (is_defi_deposit) {
const partial_value_note = {
secret: partial_claim_note_data.note_secret,
owner: input_note_1.owner,
account_required: input_note_1.account_required,
creator_pubkey = 0,
};
const partial_value_note_commitment = partial_value_note_commit(partial_value_note);
const partial_claim_note = {
deposit_value: partial_claim_note_data.deposit_value,
bridge_call_data: partial_claim_note_data.bridge_call_data_local.to_field(),
partial_value_note_commitment,
input_nullifier: partial_claim_note_data.input_nullifier,
}
const partial_claim_note_commitment = partial_claim_note_commit(partial_claim_note)
output_note_1_commitment = partial_claim_note_commitment;
defi_deposit_value = partial_claim_note.deposit_value;
require(defi_deposit_value > 0);
const { bridge_call_data_local } = partial_claim_note_data;
const bridge_call_data = bridge_call_data_local.to_field();
require(bridge_call_data_local.input_asset_id_a == input_note_1.asset_id);
if (input_note_2_in_use && (input_note_1.asset_id != input_note_2.asset_id)) {
require(defi_deposit_value == input_note_2.value);
require(bridge_call_data_local.config.second_input_in_use);
input_note_2_value = 0; // set to 0 for the 'conservation of value' equations below.
}
if (bridge_call_data_local.config.second_input_in_use) {
require(input_note_2_in_use);
require(input_note_2.asset_id == bridge_call_data_local.input_asset_id_b);
}
output_note_1_value = 0; // set to 0, since the partial claim note replaces it.
}
// Conservation of value: no value created or destroyed:
const total_in_value = public_input + input_note_1.value + input_note_2_value
const total_out_value = public_output + (is_defi_deposit ? defi_deposit_value : output_note_1_value) + output_note_2.value
// fee
const tx_fee = total_in_value - total_out_value // (no underflow allowed)
// Check input notes are valid:
let input_note_1_in_use = num_input_notes >= 1;
let input_note_2_in_use = num_input_notes == 2;
for i = 1,2:
{
if (input_note_i_in_use) {
const input_note_commitment_i = value_note_commit(input_note_i);
const exists = check_membership(
input_note_commitment_i, input_note_i_index, input_note_i_path, old_data_tree_root
);
require(exists);
} else {
require(input_note_i.value == 0);
}
}
// Compute nullifiers
for i = 1,2:
{
nullifier_i = compute_nullifier(
input_note_i.commitment,
account_private_key,
input_note_i_in_use,
);
}
require(
  output_note_1.input_nullifier == nullifier_1 &&
  output_note_2.input_nullifier == nullifier_2 &&
  partial_claim_note.input_nullifier == (is_defi_deposit ? nullifier_1 : 0)
);
// Verify account ownership
check_membership(account_note_commitment, account_note_index, account_note_path, old_data_tree_root);
message = (
public_value,
public_owner,
public_asset_id,
output_note_1_commitment, // notice this is NOT output_note_1.commitment
output_note_2.commitment,
nullifier_1,
nullifier_2,
backward_link,
allow_chain,
);
verify_signature(
message,
signature,
signer_public_key
);
// Check chained transaction inputs are valid:
const backward_link_in_use = inputs.backward_link != 0;
const note1_propagated = inputs.backward_link == input_note_1.commitment;
const note2_propagated = inputs.backward_link == input_note_2.commitment;
if (backward_link_in_use) require(note1_propagated || note2_propagated);
if (is_defi_deposit) require(allow_chain != 1);
if (inputs.allow_chain == 1) require(output_note_1.owner == input_note_1.owner);
if (inputs.allow_chain == 2) require(output_note_2.owner == input_note_1.owner);
// Constrain unused public inputs to zero:
require(defi_root == 0);
// Set public inputs (simply listed here without syntax):
proof_id,
output_note_1_commitment,
output_note_2.commitment,
nullifier_1,
nullifier_2,
public_value,
public_owner,
public_asset_id,
old_data_tree_root,
tx_fee,
asset_id,
bridge_call_data,
defi_deposit_value,
defi_root,
backward_link,
allow_chain
Claim circuit
This circuit enables converting a claim note into two value notes, according to the defi interaction result.
Diagrams
Before the claim circuit
A defi interaction is a multi-step process which ends with the claim circuit being verified on-chain. There are more complete explanations of the whole process for many individual dApps on hackmd under the 'Aztec Connect' tag. Here's a very brief summary of the defi interaction process:
- A user wishes to interact with an Ethereum L1 dApp privately. They can use Aztec Connect to hide their identity from observers. The values they send will still be visible (but not traceable back to them). Let's use Uniswap as an example.
- The user wishes to convert 1 ETH to DAI tokens.
- They submit a 'defi deposit' of 1 ETH.
- A join-split proof is generated in 'defi deposit' mode, which spends the 1 ETH and creates a partial claim note (see the diagrams or the join-split markdown file).
- The rollup provider bundles (sums) the user's request to deposit 1 ETH into uniswap with the requests of any other users who wanted to deposit ETH to uniswap. User costs are amortised this way.
- The rollup provider is able to assign a `bridge_call_data` to each 'bundle', and with knowledge of this `bridge_call_data` and the `total_input_value` being deposited, the rollup provider can 'complete' each user's partial claim note. I.e. the rollup provider creates a completed 'claim note' for each user. This claim note can be used later in the process to withdraw DAI via the claim circuit.
- This bundled (summed) deposit of X ETH is sent to a 'Defi Bridge Contract' - a contract specifically designed to perform swaps between Aztec Connect users and Uniswap.
- The Defi Bridge Contract sends the `total_input_value = X` ETH to Uniswap (along with some parameters which we won't go into here), and receives back Y DAI.
- The rollup contract emits an event which says "X ETH was swapped for Y DAI, and here is a 'defi interaction nonce' which represents this interaction".
- The rollup provider listens for this event to learn whether their interaction was successful, and to learn the new data: `total_output_value_a = Y` and the `defi_interaction_nonce`.
- The rollup provider is now in a position to submit a claim (using the claim circuit) on behalf of each user who partook in the defi interaction.
  - Take note of this. Submission of a claim needn't be done by the user; no private key is required. The rollup provider is incentivised to generate a claim proof by being offered a fee via the earlier join-split proof.
Now we can get into the details of the claim circuit.
Claim circuit
At a high level, the claim circuit does the following:
- Spends a user's claim note;
- Refers to a particular defi interaction note (which contains uniquely-identifying details of a particular defi interaction);
- Outputs up to two output 'value notes' whose values are proportional to the amount originally defi-deposited by this user:
  - `output_note_1.value = ( user_input_amount / total_input_amount ) * total_output_amount_a`
  - `output_note_2.value = ( user_input_amount / total_input_amount ) * total_output_amount_b`
  - (In our earlier example, `output_note_1.value = ( 1 / X ) * Y` DAI.)
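In integer arithmetic the proportional payout looks like this. This is a sketch only: the circuit enforces the ratio with a multiplicative check (see "Ratio Checks" below) rather than performing a division:

```python
def claim_output_value(user_input: int, total_input: int,
                       total_output: int) -> int:
    # output = (user_input / total_input) * total_output, computed as a
    # single multiplication then floor division to stay in integers.
    assert total_input > 0
    return (user_input * total_output) // total_input

# e.g. depositing 1 ETH of a 4 ETH bundle that returned 6000 DAI:
assert claim_output_value(1, 4, 6000) == 1500
```
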
Details
Inputs
Recall that all inner circuits must have the same number of public inputs, as they will be used homogeneously by the rollup circuit. Hence, some of the inputs to a claim circuit are unused and so set to 0.
Public Inputs
- `proof_id = ProofIds::DEFI_CLAIM`
- `output_note_commitment_1`
- `output_note_commitment_2`
- `nullifier_1`
- `nullifier_2`
- `public_value = 0`
- `public_owner = 0`
- `asset_id = 0`
- `data_root`
- `claim_note.fee`
- `claim_note_data.bridge_call_data_local.input_asset_id`
- `claim_note.bridge_call_data`
- `defi_deposit_value = 0`
- `defi_root`
- `backward_link = 0`
- `allow_chain = 0`
Private Inputs
- `claim_note_index`
- `claim_note_path`
- `claim_note: { deposit_value, bridge_call_data, /* actually a public input */ defi_interaction_nonce, fee, /* actually a public input */ value_note_partial_commitment, input_nullifier }`
- `defi_interaction_note_path`
- `defi_interaction_note: { bridge_call_data, defi_interaction_nonce, total_input_value, total_output_value_a, total_output_value_b, defi_interaction_result, commitment }`
- `output_value_a`
- `output_value_b`
Circuit Logic (Pseudocode)
Note: for Pedersen commitments, different generators are used for different types of commitment.
Computed vars:
- Extract data from the `claim_note.bridge_call_data`:
  - `bridge_call_data_local = { bridge_address_id, /* represents a defi bridge contract address */ input_asset_id_a, input_asset_id_b, output_asset_id_a, output_asset_id_b, /* if virtual, these hold the defi_interaction_nonce from when a loan/LP position was opened during some earlier defi interaction by the user */ bit_config, aux_data }`
- The same data is also currently extracted from the `defi_interaction_note.bridge_call_data`. This is redundant, but we'll only need to remove these extra constraints if we ever approach the next power of 2.
- Extract config data from `bit_config`:
  - `bit_config = { second_input_in_use, second_output_in_use }`
- `claim_note.commitment = pedersen(pedersen(deposit_value, bridge_call_data, value_note_partial_commitment, input_nullifier), defi_interaction_nonce, fee)`
- `defi_interaction_note.commitment = pedersen(bridge_call_data, total_input_value, total_output_value_a, total_output_value_b, defi_interaction_nonce, defi_interaction_result)`
- `output_value_1 = defi_interaction_result ? output_value_a : claim_note.deposit_value` (give a refund if the interaction failed).
- `output_asset_id_1 = defi_interaction_result ? output_asset_id_a : input_asset_id`
- `output_value_2 = second_output_virtual ? output_value_a : output_value_b`
  - If the second output is virtual, its value must equal that of the first output.
- `output_asset_id_2 = second_output_virtual ? concat(1, defi_interaction_nonce) : output_asset_id_b`
  - If virtual, attach a record of this 'opening defi interaction nonce' to the note, via the asset_id field.
Checks:
- Many values are range-checked. See the constants.hpp files for the variables whose bit-lengths are constrained.
- Check `bit_config` vars:
  - Extract `second_input_in_use` and `second_output_in_use` from `claim_note_data.bridge_call_data_local.config`

```
// The below six constraints are exercised in bridge_call_data.hpp; see comments there for elaboration.
!(input_asset_id_b.is_zero) must_imply config.second_input_in_use
!(output_asset_id_b.is_zero) must_imply config.second_output_in_use
config.second_input_in_use must_imply input_asset_id_a != input_asset_id_b
config.second_output_in_use && both_outputs_real must_imply output_asset_id_a != output_asset_id_b
first_output_virtual must_imply output_asset_id_a == virtual_asset_id_placeholder
second_output_virtual must_imply output_asset_id_b == virtual_asset_id_placeholder
```
```
require(claim_note.deposit_value != 0)
require(deposit_value <= total_input_value)
require(output_value_a <= total_output_value_a)
require(output_value_b <= total_output_value_b)
require(claim_note.bridge_call_data == defi_interaction_note.bridge_call_data)
require(claim_note.defi_interaction_nonce == defi_interaction_note.defi_interaction_nonce)
```
- Check the claim note exists in the data tree using `data_root, claim_note_path, claim_note_index, claim_note.commitment`.
- Check the defi interaction note exists in the data tree using `defi_root, defi_interaction_note_path, defi_interaction_nonce`.
  - Note: the leaf index of a defi interaction note is its `defi_interaction_nonce`. The `defi_interaction_nonce` is derived in the rollup circuit at the time the defi deposit (join-split) is processed.
Ratio Checks (very complex code):
- Ensure `output_value_a == (deposit / total_input_value) * total_output_value_a`, unless `output_value_a == 0 && total_output_value_a == 0` (i.e. unless no value was returned by the bridge contract for output_a).
- Ensure `output_value_b == (deposit / total_input_value) * total_output_value_b`, unless `output_value_b == 0 && total_output_value_b == 0` (i.e. unless no value was returned by the bridge contract for output_b).
- Also prevent zero denominators `total_input_value`, `total_output_value_a`, and `total_output_value_b`.
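A ratio check of this form can be expressed without field division by cross-multiplying. The sketch below is a simplification, assuming integer semantics with a bounded rounding remainder; the actual circuit handles residuals with dedicated constraints:

```python
def check_ratio(deposit, total_input_value, output_value, total_output_value):
    """Check output_value == (deposit / total_input_value) * total_output_value.

    Simplified sketch: uses cross-multiplication with a bounded remainder
    (integer floor rounding) instead of field division.
    """
    if output_value == 0 and total_output_value == 0:
        return True  # the bridge returned no value for this output
    if total_input_value == 0 or total_output_value == 0:
        return False  # zero denominators are forbidden
    # output_value * total_input_value should equal deposit * total_output_value,
    # up to a remainder smaller than total_input_value (from integer rounding).
    lhs = output_value * total_input_value
    rhs = deposit * total_output_value
    return 0 <= rhs - lhs < total_input_value
```

Cross-multiplication keeps all operations in integer (or field) arithmetic, which is why the circuit can avoid a costly division gadget.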
Computed public inputs:
```
nullifier_1 = pedersen(claim_note.commitment)
nullifier_2 = pedersen(defi_interaction_note.commitment, claim_note.commitment)
output_note_commitment1 = pedersen(value_note_partial_commitment, output_value_1, output_asset_id_1, nullifier_1)
output_note_commitment2 = (second_output_virtual ^ second_output_real)
    ? pedersen(value_note_partial_commitment, output_value_2, output_asset_id_2, nullifier_2)
    : 0
```
Rollup circuit
Circuit Description
The rollup circuit aggregates proofs from a defined set of ‘inner’ circuits.
Each inner circuit has 16 public inputs. The rollup circuit will execute several defined subroutines on the public inputs.
Notation
We use the following definitions in this spec:
- `NUM_BRIDGE_CALLS_PER_BLOCK = 32`
- `NUM_ASSETS = 16`
- `NUM_FIELDS = 8` (number of inner-circuit public inputs propagated by the rollup circuit)
- rollup size: the number of transaction proofs in a single rollup
Public Inputs: Detail
There are three sections of public inputs:
- Rollup Proof Data: field elements that define the rollup block information (described below)
- Rolled-Up Transactions Data: inner-circuit public inputs (`NUM_FIELDS` inputs per rolled-up transaction)1
- Recursive Proof Data: group elements represented as 68-bit-limb field elements; see here for explanation.
All are field elements. The public inputs of the first section are the following:
- `rollup_id`
- `rollup_size`
- `data_start_index`
- `old_data_root`
- `new_data_root`
- `old_null_root`
- `new_null_root`
- `old_data_roots_root`
- `new_data_roots_root`
- `old_defi_root = 0`
- `new_defi_root`
- `defi_bridge_call_datas` (size: `NUM_BRIDGE_CALLS_PER_BLOCK`)
- `defi_bridge_deposits` (size: `NUM_BRIDGE_CALLS_PER_BLOCK`)
- `asset_ids` (size: `NUM_ASSETS`)
- `total_tx_fees` (size: `NUM_ASSETS`)
- `public_inputs_hash`
The `public_inputs_hash` value is a SHA256 hash of the set of all join-split public inputs that will be broadcast on-chain. These are:
- `proof_id`
- `output_note_commitment_1`
- `output_note_commitment_2`
- `nullifier_1`
- `nullifier_2`
- `public_value`
- `public_owner`
- `public_asset_id`
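The hash computation can be sketched as follows. This is an illustration, not the canonical serialisation: it assumes each field is packed as a 32-byte big-endian word and concatenated in the order listed above, which is a common EVM-compatible convention rather than something the text specifies:

```python
import hashlib

def public_inputs_hash(txs):
    """SHA256 over the broadcast join-split public inputs of every tx.

    Assumes (for illustration) 32-byte big-endian packing of each field,
    concatenated tx by tx in the order listed in the spec.
    """
    buf = b"".join(
        field.to_bytes(32, "big")
        for tx in txs
        for field in (tx["proof_id"], tx["output_note_commitment_1"],
                      tx["output_note_commitment_2"], tx["nullifier_1"],
                      tx["nullifier_2"], tx["public_value"],
                      tx["public_owner"], tx["public_asset_id"])
    )
    return hashlib.sha256(buf).digest()
```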
Private Inputs: Detail
The following inputs are private to reduce proof size:
- The recursive proof output of each inner proof (4 elements represented as 16 elements, see above)
- The remaining public inputs of each inner-circuit proof (see footnote 1)
- `old_data_path`
- `linked_commitment_paths`
- `linked_commitment_indices`
- `new_null_roots` (except the latest one, since that becomes a public input)
- `old_null_paths`
- `data_roots_paths`
- `data_roots_indices`
Index of Functions
- `Extract`: extracts the public inputs from an inner proof, and validates that the result matches the rollup's inner public inputs
- `Aggregate`: proof aggregation function for ultimate batch verification outside the circuit, given a verification key and (optionally, as defined by the 4th input parameter) a previous output of `Aggregate`; returns a BN254 point pair
- `NonMembershipUpdate`: checks that a nullifier is not in a nullifier set given its root, then inserts the nullifier and validates the correctness of the associated Merkle root update
- `BatchUpdate`: inserts a set of compressed note commitments into the note tree and validates the correctness of the associated Merkle root update
- `Update`: inserts a single leaf into the root tree and validates the correctness of the associated Merkle root update
- `ProcessDefiDeposit`: ensures that if a given inner proof is a defi deposit proof, it has a valid bridge call data that matches one of the input bridge call datas to the rollup; also adds the `defi_interaction_nonce` to the encrypted claim note of a defi deposit proof
- `ProcessClaim`: checks that the claim proof is using the correct defi root
Circuit Logic (Pseudocode)
- Let `Q_0 = [0, 0]`
- Validate `num_inputs == N`
- Let `previous_note_commitment_1 = 0; previous_note_commitment_2 = 0; previous_allow_chain = 0`
- For `i = 1, ..., num_inputs`:
  - Let `pub_inputs = Extract(PI_i)`
  - Let `vk = vks[proof_id_i]`
  - Let `Q_i = Aggregate(PI_i, pub_inputs, vk, Q_{i-1}, (i > 1))`
  - Let `leaf_{2i-1} = output_note_commitment_1_i`
  - Let `leaf_{2i} = output_note_commitment_2_i`
  - Validate `NonMembershipUpdate(null_root_{2i-1}, null_root_{2i}, nullifier_1_i)`
  - Validate `NonMembershipUpdate(null_root_{2i}, null_root_{2i+1}, nullifier_2_i)`
  - Validate `Membership(old_data_roots_root, data_roots_indices[i], data_roots_paths[i], data_tree_root_i)`
  - If `pub_inputs.PROOF_ID == DEFI_DEPOSIT` then `ProcessDefiDeposit`:
    - Check `pub_inputs.ASSET_ID` matches exactly one (say the `k`-th) bridge call data in `bridge_call_datas`
    - Update `defi_bridge_deposits[k] += pub_inputs.PUBLIC_OUTPUT`
    - Update `encrypted_claim_note += (defi_interaction_nonce * rollup_id + k) * G_j`, `j ∈ {0, 1, 2, 3}`
  - Validate `ProcessClaim(pub_inputs, new_defi_root)`
  - Let `chaining = propagated_input_index != 0`
  - Let `propagating_previous_output_1 = backward_link == previous_note_commitment_1`
  - Let `propagating_previous_output_2 = backward_link == previous_note_commitment_2`
  - Let `previous_tx_linked = propagating_previous_output_1 || propagating_previous_output_2`
  - Let `start_of_subchain = chaining && !previous_tx_linked`
  - Let `middle_of_chain = chaining && previous_tx_linked`
  - If `start_of_subchain` then:
    - Validate `Membership(old_data_root, linked_commitment_indices[i], linked_commitment_paths[i], backward_link)`
  - Let `propagating_previous_output_index = propagating_previous_output_1 ? 1 : propagating_previous_output_2 ? 2 : 0`
  - If `middle_of_chain` then:
    - `require(previous_allow_chain == propagating_previous_output_index, "not permitted to propagate this note")`
    - Set the inner proof value corresponding to the commitment being propagated to `0`.
    - Set the inner proof value corresponding to the nullifier of the commitment being propagated to `0`.
- Validate `[P1, P2] = Q_{num_inputs}`
- Validate `BatchUpdate(old_data_root, new_data_root, data_start_index, leaf_1, ..., leaf_{2 * num_inputs})`
- Validate `old_null_root = null_root_1`
- Validate `new_null_root = null_root_{2 * num_inputs + 1}`
A transaction proof (i.e. inner proof) contains a total of 16 public inputs but the rollup circuit _propagates_ only 8 of them as its public inputs. Those public inputs of the inner proof marked as ✅ are propagated:
✅ `proof_id`
✅ `output_note_1_commitment`
✅ `output_note_2_commitment`
✅ `input_note_1_nullifier`
✅ `input_note_2_nullifier`
✅ `public_value`
✅ `public_owner`
✅ `public_asset_id`
❌ `merkle_root`
❌ `tx_fee`
❌ `asset_id`
❌ `bridge_call_data`
❌ `defi_deposit_value`
❌ `defi_root`
❌ `backward_link`
❌ `allow_chain`
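The chained-transaction bookkeeping in the loop above can be mirrored outside the circuit as a plain function (an illustrative sketch; the in-circuit version operates on witness values rather than raising exceptions):

```python
def classify_chaining(propagated_input_index, backward_link,
                      previous_note_commitment_1, previous_note_commitment_2,
                      previous_allow_chain):
    """Classify a tx's position in a note chain, per the rollup pseudocode."""
    chaining = propagated_input_index != 0
    prop_1 = backward_link == previous_note_commitment_1
    prop_2 = backward_link == previous_note_commitment_2
    previous_tx_linked = prop_1 or prop_2
    start_of_subchain = chaining and not previous_tx_linked
    middle_of_chain = chaining and previous_tx_linked
    propagating_index = 1 if prop_1 else (2 if prop_2 else 0)
    # Mid-chain propagation is only allowed if the previous tx permitted it.
    if middle_of_chain and previous_allow_chain != propagating_index:
        raise ValueError("not permitted to propagate this note")
    return start_of_subchain, middle_of_chain, propagating_index
```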
Root Rollup circuit
Circuit Description
This circuit rolls up other rollup proofs.
It is defined by a parameter `rollup_num`, the number of inner rollup proofs it aggregates. For convenience, we denote `rollup_num` by `M`.
Circuit Inputs: Summary
The inputs for the root rollup circuit are the public and private inputs listed below. As previously, the field is from the BN254 specification.
Public Inputs
The root rollup circuit contains 17 public inputs.
The first public input is a SHA256 hash (reduced modulo the BN254 group order) of the following parameters:
- `rollup_id` (the location where `new_root_M` will be inserted in the roots tree)
- `rollup_size`
- `data_start_index`
- `old_data_root`
- `new_data_root`
- `old_null_root`
- `new_null_root`
- `old_root_root`
- `new_root_root`
- `old_defi_root`
- `new_defi_root`
- `bridge_call_datas` (size is `NUM_BRIDGE_CALLS_PER_BLOCK`)
- `defi_deposit_sums` (size is `NUM_BRIDGE_CALLS_PER_BLOCK`)
- `encrypted_defi_interaction_notes` (size is `NUM_BRIDGE_CALLS_PER_BLOCK`)
- `previous_defi_interaction_hash`
- `rollup_beneficiary`
- For each inner rollup proof: the `public_inputs_hash` of that rollup
The remaining 16 public inputs are 68-bit limbs of two BN254 group elements. Each group element consists of two base-field coordinates, each of which is split into 4 68-bit limbs.
The two group elements, `P1` and `P2`, represent the `recursive_proof_output`: group elements that must satisfy a pairing check against the second-source-group element produced by the Ignition trusted setup ceremony, in order for the set of recursive proofs in the root rollup circuit to be valid.
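The 68-bit limb decomposition can be illustrated directly (a sketch; inside the circuit this is performed on field witnesses, but the arithmetic is the same):

```python
LIMB_BITS = 68

def to_limbs(x):
    """Split an up-to-254-bit integer into 4 little-endian 68-bit limbs."""
    mask = (1 << LIMB_BITS) - 1
    return [(x >> (LIMB_BITS * i)) & mask for i in range(4)]

def from_limbs(limbs):
    """Recombine 4 little-endian 68-bit limbs into the original integer."""
    return sum(limb << (LIMB_BITS * i) for i, limb in enumerate(limbs))
```

Four 68-bit limbs cover 272 bits, comfortably more than the 254-bit BN254 base field, which leaves headroom for non-native field arithmetic inside the circuit.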
Broadcasted Inputs
In addition to the public inputs, the preimage to the above SHA256 hash is also broadcasted with the proof.
The purpose of the SHA256 compression is not to hide information, it is solely to reduce the number of public inputs to the circuit.
This is because, for a verifier smart contract on the Ethereum blockchain network, the computational cost of processing a public input is ~160 gas. The computational cost of including a 32-byte value in a SHA256 hash is 6 gas. Therefore reducing the public inputs via SHA256 hashing represents a significant gas saving, lowering the cost of processing a rollup block.
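Using the per-unit costs quoted above, the saving is easy to estimate (a rough sketch that ignores SHA256's base cost and the one remaining public input):

```python
def rollup_input_gas(num_values, as_public_inputs):
    """Rough gas estimate for exposing num_values 32-byte values to the verifier."""
    PUBLIC_INPUT_GAS = 160  # approximate cost of processing one public input
    SHA256_WORD_GAS = 6     # approximate cost of hashing one 32-byte word
    per_value = PUBLIC_INPUT_GAS if as_public_inputs else SHA256_WORD_GAS
    return num_values * per_value
```

For a few hundred broadcast values per block, hashing is more than an order of magnitude cheaper than exposing each value as a separate public input.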
The `rollup_beneficiary` is added to the circuit to ensure the proof constructor can pay whom they intend.
Private Inputs
- The recursive proof output of each inner rollup proof (4 elements represented as 16 elements, see above)
- The remaining public inputs of each rollup proof
Circuit Logic (Pseudocode)
- For `i = 1, ..., M`, verify the `i`-th inner rollup proof against the verification key of the rollup circuit.
- For `i = 2, ..., M`, check that `new_data_root_{i-1}` = `old_data_root_i`.
- Validate `Update(old_data_roots_root, new_data_roots_root, rollup_id, new_data_root_M)`
- Validate that the `new_defi_root` of each real inner rollup proof is equal to the input `new_defi_root` to the root rollup
- Validate that the `bridge_call_datas` in each real inner rollup proof match the input `bridge_call_datas` to the root rollup
- Accumulate defi deposits across the inner rollup proofs
- Add the input `defi_interaction_notes` to the `defi_tree` and compute `previous_defi_interaction_hash := Hash(defi_interaction_notes)`
- Range constrain `rollup_beneficiary` to be an Ethereum address
Root Verifier Circuit
We use the notation of the Aztec Yellow Paper. In particular, the curve is defined over a finite field `F_q`, `r` is a prime of roughly 254 bits, and `G1` is a subgroup of BN254 of order `r`.
Circuit Description
This is a standard PLONK circuit that verifies a TurboPLONK root rollup proof. At the time the root verifier circuit is constructed, it is supplied a list `L_vk` of TurboPLONK verification keys, one for each root rollup circuit that is to be verifiable by it. Let `N_vk = #L_vk` denote the number of root rollup shapes that are accepted by the root verifier circuit.
Circuit Inputs: Summary
The inputs for the root verifier circuit are the public and private inputs described below.
Public Inputs
The root verifier receives 17 public inputs. The first public input is a SHA256 hash (reduced modulo `r`) of broadcast data; this is, in fact, the same datum that appears as a public input to the root rollup circuit. The next 16 public inputs encode the recursion output of the root verifier circuit. This is the data of two BN254 group points. Each point consists of two base-field coordinates, each of which is split into 4 68-bit limbs that are regarded as scalar-field elements.
Private Inputs
The root verifier's private inputs are the verification keys and the recursive proof output of the root rollup circuit. Each verification key consists of 15 group elements (11 corresponding to constraint selectors, and 4 corresponding to permutation selectors), each one contributing 8 limbs, leading to a total of `120 * N_vk` limb inputs for the keys. The remaining private inputs to the root verifier circuit are the 16 limbs that make up the recursive proof output of the root rollup circuit.
Circuit Logic
When verifying a root rollup circuit, its verification key is instantiated as a witness variable in the root verifier circuit, and a constraint is imposed that this key lies in `L_vk`, using a Pedersen hash-like compression function. The remaining constraints defining this circuit are generated by the standard library's recursive verifier. These constraints are, roughly speaking, those described in the verifier's algorithm in the PLONK paper. More specifically, one should look at the VIP Edition of the paper, making minor changes to include a simplification proposed by Kev Wedderburn for smaller proof size (see IACR version 20210707:125953).
Rollup Contract
The rollup contract is responsible for processing Aztec zkRollups: relaying them to a verifier contract for validation, and performing all relevant token transfers and defi bridge interactions.
High-Level Overview of Layer 2 Architecture
The specifics of the Layer 2 architecture are not explicitly in scope for the smart contract audit, as the rules/transaction semantics are defined via the logic in our ZK-SNARK cryptographic circuits, not the L1 smart contracts.
However, understanding the architecture may be useful to better understand the logic of the rollup processor smart contract, and the logic it executes when processing a rollup block.
State Model
L2 state is recorded in 5 append-only databases, represented as Merkle trees. The Rollup contract records the roots of each tree via the rollupStateHash variable.
A call to the processRollup(...) method is, at its core, a request to update the roots of the above Merkle trees due to changes in the underlying databases from a block of L2 transactions.
The main databases/Merkle trees are:
- dataTree contains UTXO notes that contain all created value notes and account notes
- defiTree contains the results of previous L1 contract interactions instigated from the rollup contract
- rootTree contains all past (and the present) Merkle roots of the dataTree. Used in L2 transactions to prove the existence of notes in the dataTree.
The dataTree and defiTree share an associated nullifier set. A nullifier set is an additional database, also represented as a Merkle tree, whose roots are included in rollupStateHash. The nullifier set can be shared because there is no risk of collisions.
Nullifier sets record all items that have been deleted from their linked database. The encryption algorithm used to encrypt nullifiers is different from the encryption used for their counterpart objects in their linked database. This gives us the property of unlinkability - observers cannot link note creation to note destruction, which obscures the transaction graph.
The rootTree has no linked nullifier set as it is not possible to delete members of rootTree.
L2 data structures
The following is a brief description of the data structures in the Aztec L2 architecture. See notes_and_nullifiers for a more complete description.
Value notes are stored in the dataTree. They represent a discrete sum of ETH, ERC20 tokens or virtual assets held by a user.
Account notes are stored in the dataTree. They link a human-readable alias to both an account public key and to a spending public key. A user can have multiple account notes with multiple spending keys, but all must share the same alias and account key.
Note: Account keys are used to decrypt/view notes; spending keys are required to spend notes. The security requirements for the former are weaker than for the latter, as spending keys are required to move user funds.
DeFi notes are stored in the defiTree. They represent a result of an L1 contract interaction instigated by the rollup processor contract. This type of note records the number of input/output assets from the interaction (as well as their asset types) and information about whether the corresponding interaction succeeded/failed.
Claim notes are stored in the dataTree. This type of note represents a claim on the future proceeds of an L1 contract interaction. Claim notes are created from value notes, and are converted back into value notes with the help of a defi note.
L2 high-level circuit architecture
The Aztec network utilizes the following ZK-SNARK circuits to describe and validate L2 transactions:
Single transaction circuits
Join-Split circuit: describes a single deposit/withdraw/spend/defiDeposit transaction. The proof is created by the user on their local hardware.
Account circuit: describes a single account transaction. The proof is created by the user on their local hardware.
Claim circuit: describes a single defiClaim transaction. The proof is created by the rollup provider, since no secret information is required to create it. In theory this proof could be created by the user locally, but proof creation is deferred to the rollup provider for a better user experience.
Rollup circuits
There are 3 circuit types used in AztecConnect:
- Inner rollup circuit: verifies up to 28 single transaction proofs and performs the required L2 state updates.
- Root rollup circuit: referred to as the rollup circuit in the smart contract code/comments. This circuit verifies up to 28 inner rollup proofs.
- Root verifier circuit: verifies a single root rollup proof.
The inner rollup/root rollup design was introduced in order to enable better parallelism.
Knowledge of the existence of the root verifier circuit is likely beyond the scope of this audit. It is used to simplify the computations required by the smart contract PLONK verifier. All other circuits/proofs are created using the “Turbo PLONK” ZK-SNARK proving system.
Regular PLONK proofs are slower to construct but faster to verify compared to Turbo PLONK proofs. The root verifier circuit is made using regular PLONK, and it verifies the Turbo PLONK root rollup circuit. This reduces the computations (and gas costs) required to verify the proof on-chain.
Aztec uses recursive ZK-SNARK constructions to ensure that only the final ZK-SNARK proof in the transaction stack needs to be verified on-chain. If the root verifier proof is correct, one can prove inductively that all other proofs in the transaction stack are correct.
L2 transaction types
An Aztec rollup block contains up to 896 individual user transactions, which represent one of seven transaction types. Each transaction type is defined via a proofId variable attached to the transaction.
proofId | transaction type | description |
---|---|---|
0 | padding | An empty transaction - present when there are not enough user transactions to fill the block |
1 | deposit | Converts public L1 ETH/ERC20 tokens into value notes |
2 | withdraw | Converts value notes into public ETH/ERC20 tokens on L1 |
3 | spend | Private L2 transaction - converts value notes into different value notes |
4 | account | Creates a user account note |
5 | defiDeposit | Converts a value note into a claim note |
6 | defiClaim | Converts a claim note into a value note |
Anatomy of an L2 transaction
Each user transaction in the rollup block will have 8 `uint256` variables associated with it, present in the transaction calldata when `processRollup(...)` is called.
While represented as a `uint256` in the smart contract, these variables are big integers taken modulo the BN254 elliptic curve group order. This is verified in StandardVerifier.sol.
Not all fields are used by all transaction types.
publicInput | name | description |
---|---|---|
0 | proofId | Defines the transaction type (checked in the rollup ZK-SNARK) |
1 | noteCommitment1 | The 1st note created by the transaction (if applicable) |
2 | noteCommitment2 | The 2nd note created by the transaction (if applicable) |
3 | nullifier1 | The 1st nullifier for any notes destroyed by the transaction (if applicable) |
4 | nullifier2 | The 2nd nullifier for any notes destroyed by the transaction (if applicable) |
5 | publicValue | Amount being deposited/withdrawn (if applicable) |
6 | publicOwner | Ethereum address of a user depositing/withdrawing funds (if applicable) |
7 | assetId | 30-bit variable that represents the asset being deposited/withdrawn (if applicable) |
As not all fields are used by all transaction types, a custom encoding algorithm is used to reduce the calldata payload of these transactions. Transactions are decoded in Decoder.sol.
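Once decoded, each transaction occupies eight 32-byte words. A minimal reader for the decoded layout is sketched below (this shows the plain 8-word layout from the table above, not the compressed calldata encoding handled by Decoder.sol):

```python
# Field names in publicInput order 0..7, per the table above.
TX_FIELDS = ["proofId", "noteCommitment1", "noteCommitment2",
             "nullifier1", "nullifier2", "publicValue", "publicOwner", "assetId"]

def decode_tx(data, offset=0):
    """Read one decoded L2 transaction (8 x 32-byte big-endian words)."""
    return {
        name: int.from_bytes(data[offset + 32 * i: offset + 32 * (i + 1)], "big")
        for i, name in enumerate(TX_FIELDS)
    }
```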
Data included in a rollup transaction
When the `processRollup(...)` function is called, the input variable `bytes calldata encodedProofData` contains the core information required to validate and process an Aztec rollup block.
Due to significant gas inefficiencies in the Solidity ABI decoding logic, custom encoding is used and the overall data structure is wrapped in a bytes variable.
The proofData can be split into 3 key components:
- Rollup header - a fixed-size block of data that records the key properties of the rollup block.
- Transaction data - a variable-size block that records the encoded user transaction data
- PLONK proof - fixed-size block of data that contains a PLONK ZK-SNARK validity proof that proves the L2 transaction logic has been correctly followed.
Rollup Header Structure
byte range | num bytes | name | description |
---|---|---|---|
0x00 - 0x20 | 32 | rollupId | Unique rollup block identifier. Equivalent to block number |
0x20 - 0x40 | 32 | rollupSize | Max number of transactions in the block |
0x40 - 0x60 | 32 | dataStartIndex | Position of the next empty slot in the Aztec dataTree |
0x60 - 0x80 | 32 | oldDataRoot | Root of the dataTree prior to rollup block’s state updates |
0x80 - 0xa0 | 32 | newDataRoot | Root of the dataTree after rollup block’s state updates |
0xa0 - 0xc0 | 32 | oldNullRoot | Root of the nullifier tree prior to rollup block’s state updates |
0xc0 - 0xe0 | 32 | newNullRoot | Root of the nullifier tree after rollup block’s state updates |
0xe0 - 0x100 | 32 | oldDataRootsRoot | Root of the tree of dataTree roots prior to rollup block’s state updates |
0x100 - 0x120 | 32 | newDataRootsRoot | Root of the tree of dataTree roots after rollup block’s state updates |
0x120 - 0x140 | 32 | oldDefiRoot | Root of the defiTree prior to rollup block’s state updates |
0x140 - 0x160 | 32 | newDefiRoot | Root of the defiTree after rollup block’s state updates |
0x160 - 0x560 | 1024 | bridgeCallDatas[NUMBER_OF_BRIDGE_CALLS] | Size-32 array of bridgeCallDatas for bridges being called in this block. If bridgeCallData == 0, no bridge is called. |
0x560 - 0x960 | 1024 | depositSums[NUMBER_OF_BRIDGE_CALLS] | Size-32 array of deposit values being sent for bridges being called in this block |
0x960 - 0xb60 | 512 | assetIds[NUMBER_OF_ASSETS] | Size-16 array of the assetIds for assets being deposited/withdrawn/used to pay fees in this block |
0xb60 - 0xd60 | 512 | txFees[NUMBER_OF_ASSETS] | Size-16 array of transaction fees paid to the rollup beneficiary, denominated in each assetId |
0xd60 - 0x1160 | 1024 | interactionNotes[NUMBER_OF_BRIDGE_CALLS] | Size-32 array of defi interaction result commitments that must be inserted into the defiTree at this rollup block |
0x1160 - 0x1180 | 32 | prevDefiInteractionHash | A SHA256 hash of the data used to create each interaction result commitment. Used to validate correctness of interactionNotes |
0x1180 - 0x11a0 | 32 | rollupBeneficiary | The address that the fees from this rollup block should be sent to. Prevents a rollup proof being taken from the transaction pool and having its fees redirected |
0x11a0 - 0x11c0 | 32 | numRollupTxs | Number of “inner rollup” proofs used to create the block proof. “inner rollup” circuits process 3-28 user txns, the outer rollup circuit processes 1-28 inner rollup proofs. |
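The fixed offsets in the table can be read directly from the proof data. A minimal parser sketch for a few of the scalar header fields (offsets taken from the table; the array-valued fields are omitted for brevity):

```python
# Byte offsets of selected 32-byte header slots, from the rollup header table.
HEADER_SLOTS = {
    "rollupId": 0x00,
    "rollupSize": 0x20,
    "dataStartIndex": 0x40,
    "oldDataRoot": 0x60,
    "newDataRoot": 0x80,
    "prevDefiInteractionHash": 0x1160,
    "rollupBeneficiary": 0x1180,
    "numRollupTxs": 0x11a0,
}

def read_header_field(proof_data, name):
    """Read one 32-byte header field as a big-endian integer."""
    off = HEADER_SLOTS[name]
    return int.from_bytes(proof_data[off:off + 32], "big")
```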
N.B. our documentation will sometimes refer to a “note” as a “commitment” (they are effectively synonyms in our architecture).
Security properties of Aztec
The tokens/ETH in every un-spent value note in the dataTree must be fully collateralised on-chain. That is, the RollupProcessor.sol contract must own enough ERC20 tokens/ETH to cover the value represented in all of its un-spent notes.
Consequently, whenever a user creates a deposit transaction, they must have previously transferred/approved an equivalent amount of ETH/tokens to RollupProcessor.sol.
It should also not be possible for an attacker to create value notes that are linked to ETH/tokens deposited by a different user without their express permission.
More generally it is essential that front-running attacks are not possible. Front-running attacks are attacks where an attacker takes a transaction out of the transaction pool and manipulates it to re-route value to/from an account not intended by the original transaction sender.
Value can also be deposited to the system via defi interactions. When claim notes are converted into value notes, an equivalent amount of ETH/tokens must have been deposited into the bridge by a defi interaction (described in the next section).
When value is extracted from RollupProcessor.sol, an equivalent amount of value recorded in value notes must have been destroyed.
Assuming the cryptography is correct, this means that in `processRollup(...)`'s call-data, there must be a withdraw transaction whose value field matches the amount being withdrawn.
Alternatively, value can be extracted if the rollup header contains a non-zero value inside the `depositSums` array (this implies that value notes have been converted into claim notes and we are instructing the rollup to send tokens to a specified bridge contract).
Anatomy of an Aztec Connect defi transaction
An outbound defi interaction is described by an instance of a `FullBridgeCallData` struct and a `depositSum` (present in the rollup header in the `bridgeCallDatas` and `depositSums` arrays).
An instance of the struct uniquely defines the expected inputs/outputs of a defi interaction.
Before being unpacked into the aforementioned struct, the values (other than `bridgeGasLimit` and `bridgeAddress`) are encoded in a `uint256` bit-string containing multiple fields.
When unpacked, its data is used to create the `FullBridgeCallData` struct:
struct FullBridgeCallData {
uint256 bridgeAddressId;
address bridgeAddress;
uint256 inputAssetIdA;
uint256 inputAssetIdB;
uint256 outputAssetIdA;
uint256 outputAssetIdB;
uint256 auxData;
bool firstInputVirtual;
bool secondInputVirtual;
bool firstOutputVirtual;
bool secondOutputVirtual;
bool secondInputInUse;
bool secondOutputInUse;
uint256 bridgeGasLimit;
}
For specific encoding/decoding logic see comments in RollupProcessor.sol.
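As a toy illustration of packing several fields into one `uint256` bit-string: the field names and widths below are hypothetical, chosen only to demonstrate the technique, and are NOT the production layout (see RollupProcessor.sol for the real one):

```python
# Hypothetical field widths (low bits first) -- illustration only.
FIELDS = [("bridgeAddressId", 32), ("inputAssetIdA", 30), ("inputAssetIdB", 30),
          ("outputAssetIdA", 30), ("outputAssetIdB", 30), ("auxData", 64),
          ("bitConfig", 32)]

def pack(values):
    """Pack named fields into one integer, low field first."""
    word, shift = 0, 0
    for name, width in FIELDS:
        v = values[name]
        assert v < (1 << width), f"{name} overflows {width} bits"
        word |= v << shift
        shift += width
    return word

def unpack(word):
    """Recover the named fields from a packed integer."""
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out
```

The widths above sum to 248 bits, so the packed value fits in a single 256-bit word; the real contract applies the same idea with its own layout.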
A bridge contract is an L1 smart contract that translates the interface of a generic smart contract into the Aztec Connect interface.
Interactions are modelled as synchronous or asynchronous token transfers. Input assets are sent to a bridge contract and up to two different output assets are returned. The exchange rate between the input/output assets is assumed to be unknown until the transaction is mined.
Input/output assets can be either “real” or “virtual”. A “real” token has an underlying ERC20 smart contract (or is ETH). A “virtual” token exists entirely inside the Aztec network, with no L1 counterpart. It is used to efficiently track synthetic values (such as the amount of outstanding value in a loan, or votes in a DAO).
RollupProcessor enforces that `_totalInputValue` is non-zero.
If both input assets are used, `_totalInputValue` of each input asset is transferred to the bridge before the bridge is called.
Both output assets can be virtual, but since a virtual asset's `assetId` is currently assigned as the interaction nonce of a given interaction, this would simply mean that more of the same virtual asset is minted.
DeFi Transaction Flow
If a rollup block contains DeFi interactions, the `processBridgeCalls(...)` function is called.
In the function, the following occurs:
- All outbound defi interactions in the rollup block are iterated over. For each interaction:
  - Input tokens are transferred to the specified bridge contract.
  - The bridge contract has to return 3 parameters: `uint256 outputValueA`, `uint256 outputValueB`, `bool isAsync`.
  - When one of the output assets is an ERC20 token and the corresponding output value is non-zero, the contract attempts to recover the tokens by calling `transferFrom(...)`. If the asset is ETH, the bridge transfers it to the RollupProcessor, and the RollupProcessor validates it has received a correctly-sized ETH payment. This payment is linked to the defi interaction through `_interactionNonce`.
  - A `defiInteractionResult` object is constructed based on the results of the above.
The logic for processing a single defi transaction is wrapped in a DefiBridgeProxy smart contract, which is called from the RollupProcessor via `delegateCall(...)`.
The purpose of this is to enable the call stack to be partially unwound if any step of the defi interaction fails.
For example, consider a defi interaction where 10 ETH is sent to a bridge and the expected return asset is DAI. If the defi bridge contract reverts, we want to recover the 10 ETH that was sent to the contract, without causing the entire rollup block to revert (which would enable griefing attacks). Similarly, imagine we send 10 ETH to a bridge which claims its `outputValueA` is 100 DAI. If a call to `DAI.transferFrom(...)` fails, we want to unwind the call stack such that the 10 ETH never leaves the RollupProcessor.
If the DefiBridgeProxy call fails, we record this in the `defiInteractionResult`.
This allows a future defiClaim transaction to convert any linked claim notes back into value notes, effectively returning the value (less the fee) to the user.
The expected interface for defi bridges is defined in IDefiBridge.
Encoding and Decoding of Proof Data
For info about proof data encoding, see the documentation of the Decoder contract.