Proposer Forwarder Contract #47
base: main
Conversation
Thanks for putting this together @just-mitch! Left a few comments.
The work loop will wait for a configurable amount of time (e.g. 6 seconds) into each L1 slot. This will be exposed as an environment variable `sequencer.l1TxSubmissionDeadline`.

If there are any queued requests, it will send them to the forwarder contract, and flush the `queuedRequests` list.
Not a fan of this approach. It'll either cause unnecessary delays (and we're very pressed for time during block building), or miss out on batching some txs. I'd prefer if the sequencer explicitly calls a `flush` method at the end of its loop.
Good idea! It complicates the `L1TxManager` a little, since it might flush multiple transactions in the same L1 slot, so each flushed batch needs to explicitly wait on the batch before it.
For example, if there are no pending transactions on the first turn of the work loop, but it votes (and flushes), and in the next turn it has a transaction.
But that's probably a good thing to do now anyway, since it may have happened for whatever reason that the first transaction took slightly longer than 12 seconds to return, or in general that the L1 node still considered our address "in use".
### L1PublishBlockStats

Since one `L1PublishBlockStats` can be used for multiple actions, its constituent `event` will be changed to `actions`, which will be a `string[]`.
The stats will die soon, in favor of just using metrics.
### Setup

There will be an optional environment variable `sequencer.forwarderContractAddress` that will be used to specify the forwarder contract address. To improve UX, there will be a separate environment variable `sequencer.deployForwarderContract` that will default to `true` and be used to specify whether the forwarder contract should be deployed. If so, the Aztec Labs implementation of the forwarder contract will be deployed and used by the sequencer.
We should use the universal deterministic deployer here, so we can derive the forwarder contract address from the sequencer's address. This means the user wouldn't need to configure anything new: by default, we'd compute what would be the resulting forwarder address, check if there's code there, deploy if not, and use it.
Agreed, but I think the user should be able to specify their own forwarder implementation. Maybe we should rename this optional environment variable to `sequencer.customForwarderContractAddress`, and if it is not set then we automatically deploy one for them (using UDD based on their `SEQ_PUBLISHER_PRIVATE_KEY`).
- propose an l2 block
- cast a governance proposal vote
- cast a slashing vote
- claim an epoch proof quote
Have we considered not bundling votes with block proposals? If we kept at least two different EOAs, we could push the two txs in parallel, with the voting one being "lighter" and easier to bump/replace (since it's blobless).
I guess this depends on whether votes need to happen strictly during the proposer slot.
Yes, the vote needs to come from the proposer address within its slot.
We could have a validator specify two addresses, one that will submit blocks, and another that will do everything else.
But if we do the pooled EOAs to abandon blob transactions, I still think we'd want the forwarder contract as a layer of indirection, otherwise on L1 we'd need to walk through all the approved EOAs.
I think it's simpler to keep the single proposer address that does everything.
> But if we do the pooled EOAs to abandon blob transactions, I still think we'd want the forwarder contract as a layer of indirection, otherwise on L1 we'd need to walk through all the approved EOAs.
I was thinking about just using the forwarder contract address as "the" validator's address in the staking registry. But I didn't consider that we're requiring off-chain signatures for attestations, so we cannot use the forwarder contract as it doesn't have keys associated to it (damn lack of AA).
> But I didn't consider that we're requiring off-chain signatures for attestations, so we cannot use the forwarder contract as it doesn't have keys associated to it
Attestor key != proposer key, so they can be separate for that part of it.
When we spoke about two addresses, the idea was that a proposer could specify which key it would like to use for voting at governance, such that governance could keep a proposer => voter map and deal with it that way. It was only when we also had the other cases, and some of those `doXandYandZ` functions that can also be fixed, that the forwarder started looking nice.
L1 publisher will be broken into two classes:

- within `@aztec/sequencer-client`, there will be a `L1TxManager`
- within `@aztec/prover-node`, there will be a `L1TxPublisher`

Under the hood, both of these will use the `L1TxUtils` to create and send L1 transactions.
Note that the `L1Publisher` is being used also for querying (we were lazy, yeah), which should not be among its responsibilities. I added a wrapper around the Rollup contract for querying some info at `yarn-project/ethereum/src/contracts/rollup.ts`, maybe we can move all getters there?
## Introduction

Within the same L1 transaction, one cannot make blob transactions and regular transactions from the same address.
Something is off here with the wording of "same L1 transaction" and then "multiple transactions". Should it be "block" and not "transaction" at first? E.g., within the same L1 block, one cannot make blob transactions and regular transactions from the same address.
Yes, sorry, bad typo. Should read "Within the same L1 block..."
However, aztec node operators must be able to do things like:

- propose and l2 block
Suggested change:

```diff
- - propose and l2 block
+ - propose an l2 block
```
- Allow the sequencer client to take multiple actions in the same L1 transaction
- No changes to governance/staking
- Under 50 gas overhead per L2 transaction when operating at 10TPS
Not sure how this one is chosen? (The gas overhead)
It was finger in the air, but I think this layer of indirection shouldn't eat more than 10% of our 500 gas target.
}
```

Note: this requires all the actions to succeed, so the sender must be sure that, e.g. a failed governance vote will not prevent the L2 block from being proposed.
Not clear if desirable, but you could provide a list of "shouldAllowFailure" if you want to support having failures. Mostly I would expect that the transactions bundled would either all fail or none, so don't think I would add it, just that it is an option if you find a case where it makes sense to.
I thought about this as well. I landed on the view that the node should just never send actions that it expects to fail, so I don't think we want it at the moment.
```typescript
interface L1TxManager {
  addRequest(request: L1TxRequest): void;
  // ...
}
```
What about a "clear"? Based on time of execution, it might not actually make sense for you to send it so better to drop then send a tx you know will fail.
`clear` gets a little messy if the onus is on the sequencer, I think.
How about instead the `addRequest` has a `validThroughL2Slot` parameter, and when we call `sendRequests` it filters out things that have expired?
### Setup

There will be an optional environment variable `sequencer.custForwarderContractAddress` that can be used to specify a custom forwarder contract address.
Is there a reason that `custom` is shorthanded to `cust`?
Typo. Thank you.
### Gas

Once the L2 block body is removed from calldata, the "static" arguments to call the propose function should be under 1KB.
I know you say static here, but signatures are still being passed around: `65 * (128 * 2 / 3) + 128 / 3 * 32` ≈ 7KB.
Also,
Operating at 10TPS, this would mean an overhead of under 3 gas per L2 transaction.

Unfortunately, the current design to support forced inclusion requires a hash for each transaction in the proposal args.
My Alzheimer's is kicking in, but is this actually true? As I recall it, the "hash of each transaction" is ONE hash of all of them, e.g., `txs_hash` being a commitment to all the transactions in the block, i.e., 32 bytes for the full block.
Ah you're right! Strange, do you know then why `bytes32[] txHashes;` is part of `ProposeArgs`? It doesn't look like it is used at all?
Edit: ah of course, currently needed for computing the proposal digest to check signatures. But obviously no reason we couldn't turn this into a commitment, as you say.
Add a forwarder contract that allows the sequencer client to take multiple actions in the same L1 transaction.
Adjust the sequencer client to batch its actions into a single L1 transaction.