The Mempool Problem
Before any transaction makes it into a block on a blockchain like Ethereum, it sits in a waiting room called the mempool (short for memory pool). Every node in the network can see every pending transaction in this room. The order of transactions in a block has enormous economic consequences, and because validators (the entities that produce blocks) choose that order, they can profit from arranging things in their favor.
This is the root of the problem that BTX is designed to fix. But to understand the fix, we need to understand the threat.
Think of the mempool as a public noticeboard. You write your intentions on it for everyone to see before they are acted upon. Anyone watching can rearrange the queue, jump ahead of you, or exploit the information before your transaction is finalized. That public visibility is the core vulnerability.
What is MEV?
Maximal Extractable Value (MEV) refers to the profit that a block producer can extract by reordering, inserting, or censoring transactions within the blocks they produce. The term was originally coined as "Miner Extractable Value" during proof-of-work days, but the mechanism generalizes to any context where a single party controls the ordering of transactions.
MEV takes many concrete forms. Front-running occurs when a validator or a searcher sees a large pending trade and places their own transaction ahead of it to profit from the price movement it will cause. Sandwich attacks involve wrapping a target transaction between two attacker transactions to extract value from both sides of the slippage. Arbitrage across decentralized exchanges is another common form, not inherently malicious, but still enabled by mempool visibility.
Research has documented hundreds of millions of dollars extracted via MEV annually on Ethereum alone. The problem is systemic, not incidental. It disadvantages ordinary users and corrodes trust in decentralized systems.
The fundamental fix is conceptually simple: if validators cannot read the content of transactions before they finalize a block, the information advantage disappears. This is where encrypted mempools come in.
Encrypted Mempools
An encrypted mempool is a design where users encrypt their transactions before broadcasting them. The transactions stay encrypted until they are included in a block. Only at that point does decryption happen, in a way that cannot be controlled by any single party.
The decryption must be distributed and threshold-based: no single validator should hold the decryption key, because that would just create a new single point of failure and a new target for bribes or coercion. Instead, a committee of validators collectively holds key material, and decryption requires cooperation from a minimum number of them, say, any 10 out of 20 validators.
This committee-based decryption is called threshold decryption. The encrypted mempool idea sounds straightforward in principle, but making it efficient enough to run in a live blockchain with real latency budgets is the hard part. This is where the cryptography gets interesting.
The challenge is to design a system where: (1) users encrypt transactions without any coordinator, (2) the committee collectively decrypts only the transactions included in each block, (3) all other pending transactions remain private, and (4) the whole process is fast enough to fit inside a block time.
Batch Threshold Encryption (BTE): The Right Primitive
A naive approach to threshold decryption has a fatal scaling problem. If each of N validators must send a decryption share for each of B transactions in a block, the total communication is O(N x B). For a block with 500 transactions and 100 validators, that is 50,000 messages per block. At current Ethereum block times, this is completely impractical.
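The scaling arithmetic above can be checked in a couple of lines; the figures (100 validators, 500 transactions per block) are the ones from the text:

```python
# Communication scaling of naive per-ciphertext threshold decryption vs. BTE.
# Message counts only; each message is assumed constant-size in both designs.
N, B = 100, 500          # validators, transactions per block

naive_msgs = N * B       # one decryption share per (server, ciphertext) pair
bte_msgs = N             # one constant-size message per server, regardless of B

print(naive_msgs)  # 50000
print(bte_msgs)    # 100
```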
Batch Threshold Encryption (BTE), introduced by Choudhuri et al. in 2024, is the primitive designed to escape this scaling trap. Its defining property is that the total communication required for a committee of N servers to decrypt a batch of B ciphertexts grows sublinearly per server in the batch size, and is independent of the total pool size.
In the best BTE constructions, each server sends a single constant-size message to a combiner regardless of how many transactions are in the batch. The combiner does the heavy lifting of reconstructing the batch decryption key and decrypting all transactions. Total communication across all N servers is O(N), not O(N x B).
BTE achieves this by allowing the batch to share decryption work. Instead of decrypting each transaction independently, the scheme computes one aggregate "batch secret key" that unlocks all selected transactions at once. The efficiency comes from this aggregation.
What Makes a BTE Scheme Good?
A well-designed BTE scheme must satisfy several properties simultaneously. This is harder than it sounds because these properties conflict with each other in subtle ways.
Epochless
Ciphertexts should not be tied to a specific block or decryption session. If a transaction is not included in the block it was encrypted for, it should be able to roll over to the next without re-encryption.
Collision-Free
Users should be able to encrypt without choosing an index or identifier that could collide with another user's choice. Collisions create coordination overhead and censorship surface.
Efficient
Decryption must be fast enough to fit in a block's time budget. This means quasi-linear (O(B log B)) or better, not quadratic, in the batch size.
Small Ciphertexts
Every pending transaction is a ciphertext sitting in the mempool. Ciphertext size directly affects mempool bandwidth, storage, and propagation latency across the network.
The Prior Landscape: Why Nothing Worked Perfectly
Several prior schemes attempted to build practical BTE for encrypted mempools. Each one solved some subset of the problem but left at least one critical flaw unaddressed. Understanding these failures is essential context for appreciating why BTX is a genuine advance.
Epoch-Bound Schemes
Early approaches like Shutter and Fairblock used threshold identity-based encryption (IBE), where each epoch (block) gets its own decryption key released by validators. This decrypts all ciphertexts for that epoch at once. The fatal flaw is the all-or-nothing property: releasing an epoch key exposes every pending transaction for that epoch, including those not included in the block. Privacy for non-included transactions is completely lost.
Choudhuri et al.'s original BTE construction fixed this by decrypting only the chosen subset. But their scheme requires a fresh multi-party computation setup for every block, which is enormously expensive in practice.
Session-Bound (Epochless but Index-Required) Schemes
BEAT-MEV introduced the epochless approach, removing the per-block setup requirement. However, it required each ciphertext to carry a user-chosen index from a small namespace. The constraint: each decryption session can decrypt at most one ciphertext per index. This creates two problems. First, users must coordinate or race to avoid choosing the same index (a coordination problem). Second, a validator who knows the indices can selectively stall transactions by ensuring their chosen indices are always occupied (a censorship attack).
Follow-up works like BEAT++ and BEAST-MEV improved on BEAT-MEV's computation cost and setup assumptions but retained the indexed design and its collision vulnerabilities.
Collision-Free but Expensive Schemes
Two schemes solved both the epoch-binding and collision problems. Fernando et al.'s TrX scheme achieved collision-free epochless BTE but required a Common Reference String (CRS) whose size grows linearly with the number of decryption sessions. Over the lifetime of a blockchain, this becomes an unbounded storage requirement, limiting long-lived deployments.
Boneh et al.'s Partial Fraction Evaluation (PFE) scheme also achieved collision-free epochless BTE without the growing CRS. But its concrete computation cost is approximately 2x slower than BTX in practice, and its opening phase requires 4 pairings per ciphertext slot where BTX requires only 1.
| Scheme | Collision-Free | Epochless | Decryption Cost | Ciphertext Size |
|---|---|---|---|---|
| BEAT-MEV | No | Yes | O(B²) pairings | 3|G1| + |GT| |
| BEAT++ (Agarwal et al.) | No | Yes | O(B_max log B_max) | 2|G1| + |GT| |
| Fernando et al. (TrX) | Yes | Yes | O(B log² B) | 2|G1| + |GT| (growing CRS) |
| Boneh et al. (PFE) | Yes | Yes | O(B_max log B_max)* | 2|G1| + |GT| |
| BTX | Yes | Yes | O(B log B) | |G1| + |GT| (smallest) |
Note: B is the actual batch size. B_max is the maximum supported batch size, a protocol parameter that is always greater than or equal to B. Schemes that depend on B_max rather than B waste computation when actual batches are smaller than the maximum.
Introducing BTX
BTX is a BTE construction that achieves all four desiderata simultaneously: epochless, collision-free, quasi-linear in the actual batch size (not a fixed maximum), and with the shortest ciphertext of any known BTE scheme.
The paper's authors describe the approach as "conceptually much simpler" than Boneh et al.'s PFE, and this simplicity is a genuine feature. Simpler constructions are easier to analyze, implement, audit, and optimize. The key ideas are:
- **Punctured Powers-of-τ CRS.** The public parameters include all powers of a secret scalar τ in the pairing groups, except one deliberately omitted "middle" power. This omission is the structural trick that enables encryption without an index and decryption without exposing τ.
- **Additive ElGamal Encryption.** Each ciphertext is a standard additive ElGamal encryption using the missing power as the encryption key. No batch index is needed. Ciphertexts are as compact as ordinary ElGamal ciphertexts.
- **Middle-Product Decomposition.** The expensive cross-term cancellation step in batch decryption, which naively costs O(B²) pairings, is reframed as a polynomial middle-product computation. This structure admits FFT acceleration, bringing the cost down to O(B log B).
- **Shamir Secret Sharing for Threshold.** The threshold extension is clean and immediate. Because the batch secret key is a linear function of the underlying secret powers, Shamir-sharing those powers means each server contributes a single group element to the combiner, independent of batch size.
The Core Algebra: How It Actually Works
This section walks through the construction in increasing depth. Beginners can follow the high-level narrative; protocol researchers can dig into the full algebra.
Background: Bilinear Pairings
BTX operates over a pairing-friendly elliptic curve, specifically BLS12-381 in the implementation. A pairing-friendly curve supports three groups G1, G2, and GT, together with a bilinear map (the pairing) that takes one element from G1 and one from G2 and produces an element in GT.
The critical property is bilinearity: pairing([[a]]1, [[b]]2) = [[ab]]T. This means multiplication in the exponent can be computed without knowing the exponents themselves, as long as you have the group elements. This behaviour is what makes the entire construction possible.
We write [[x]]1 to mean "x times the generator of G1", i.e., the group element encoding the scalar x. The pairing of [[x]]1 and [[y]]2 gives [[xy]]T. You can add group elements in the same group, but you cannot extract x from [[x]]1 without solving the discrete logarithm problem, which is computationally infeasible.
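To make the bracket notation concrete, here is a deliberately insecure toy model: every group element [[x]] is represented directly by its exponent x modulo a prime q, so the pairing becomes plain multiplication mod q. This throws away all hardness (which is exactly what real curves like BLS12-381 provide), but it lets the bilinearity rule be checked numerically:

```python
# Toy "exponent model" of the bracket notation: a group element [[x]] is
# stored as its discrete log x mod q. NOT secure; purely for checking algebra.
q = 2**61 - 1   # a toy prime scalar field, not the BLS12-381 group order

def pair(x1, y2):
    # e([[x]]1, [[y]]2) = [[x*y]]T: multiplication in the exponent
    return (x1 * y2) % q

a, b = 123456789, 987654321
assert pair(a, b) == (a * b) % q                  # bilinearity: result is [[ab]]T
assert pair(2 * a % q, b) == 2 * pair(a, b) % q   # scalars move through the pairing
```

Every later sketch in this walkthrough reuses this exponent representation, so each algebraic identity in the construction can be tested as ordinary modular arithmetic.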
Setup: The Punctured CRS
Key generation samples a random secret scalar τ and publishes:

crs = ( h₁, ..., h_{B_max}, h_{B_max+2}, ..., h_{2·B_max} ) with h_i = [[τ^i]]₂, together with ek = [[τ^(B_max+1)]]_T

The decryption key is a list of 2·B_max − 1 group elements in G2, covering all powers of τ from 1 to 2·B_max except the middle power B_max + 1. That missing power, published only as a GT element via the pairing, is the encryption key. Nobody can compute [[τ^(B_max+1)]]₂ from the published CRS without knowing τ. This is the foundation of the security.
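A sketch of the punctured setup in the insecure exponent model (group elements stored as their discrete logs; B_MAX = 4 is an arbitrary toy parameter, not a value from the paper):

```python
# Punctured powers-of-tau setup, toy exponent model: group elements are
# represented by their exponents mod q. No security, just the structure.
import random

q = 2**61 - 1          # toy prime scalar field, not the real BLS12-381 order
B_MAX = 4              # toy maximum batch size

tau = random.randrange(1, q)

# "G2 side" of the CRS: tau^1 .. tau^(2*B_MAX), with the middle power removed
crs = {i: pow(tau, i, q) for i in range(1, 2 * B_MAX + 1) if i != B_MAX + 1}

# the missing power is published only "in GT", as the encryption key
ek = pow(tau, B_MAX + 1, q)

assert B_MAX + 1 not in crs          # the puncture
assert len(crs) == 2 * B_MAX - 1     # every other power is public
```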
Encryption: Indexless and Simple
To encrypt a message m (a GT element), the sender picks a random scalar r and computes:

ct₁ = [[r]]₁ (a G1 element)
ct₂ = m + r · ek = m + [[r · τ^(B_max+1)]]_T (a GT element)
ciphertext = (ct₁, ct₂, π)
This is additive ElGamal encryption. The encryption pad is r multiplied by the encryption key, which equals [[r · τ^(B_max+1)]]T. The sender also attaches a non-interactive zero-knowledge proof π (a Schnorr proof via Fiat-Shamir) proving knowledge of r without revealing it. This proof is required for CCA security (explained later) and robustness against malformed ciphertexts.
Note that no index is chosen, no epoch is referenced, and no coordination is required. Encryption is fully independent.
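In the toy exponent model, encryption is a few lines. The Schnorr proof π is omitted here, and the final sanity check cheats: it strips the pad using knowledge of the secret that, in the real scheme, is only reachable through the batch decryption protocol:

```python
# Toy additive-ElGamal encryption under the punctured-CRS key.
# Exponent representation as before; the NIZK proof pi is omitted.
import random

q = 2**61 - 1
B_MAX = 4
tau = random.randrange(1, q)
ek = pow(tau, B_MAX + 1, q)        # [[tau^(B_MAX+1)]]_T from the public CRS

def encrypt(m):
    r = random.randrange(1, q)
    ct1 = r                        # stands in for [[r]]_1
    ct2 = (m + r * ek) % q         # m + [[r * tau^(B_MAX+1)]]_T
    return (ct1, ct2)

ct = encrypt(42)
# Sanity check only possible in the toy model: multiplying ct1 by ek mixes a
# "G1" value with a "GT" value, which no real party can do without the protocol.
assert (ct[1] - ct[0] * ek) % q == 42
```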
Batch Decryption: The Two-Phase Protocol
Given an ordered batch (ct₁, ..., ct_B) with B ≤ B_max, decryption proceeds in two phases.
Phase 1: Computing the Batch Secret Key σ
The server computes the batch secret key (in the threshold setting, each server computes its share of it):

σ = Σ_{k=1..B} τ^k · ct_{k,1}
This is a single multi-scalar multiplication (MSM) of size B in G1. It compactly encodes the contribution of the entire batch to the decryption process. The key observation: each ciphertext is multiplied by a different power of τ, so their contributions can be separated in the next phase.
Phase 2: Opening Individual Slots
To recover the message at slot ℓ, the server pairs σ with the CRS element [[τ^(B_max - ℓ + 1)]]₂:

α_ℓ = e(σ, [[τ^(B_max - ℓ + 1)]]₂) = Σ_{k=1..B} [[r_k · τ^(k + B_max - ℓ + 1)]]_T
This sum contains the desired encryption pad [[r_ℓ · τ^(B_max+1)]]T (when k = ℓ). But it also contains unwanted cross-terms from every other ciphertext in the batch. The exponent k + B_max - ℓ + 1 equals B_max + 1 only when k = ℓ, so all other terms are different and must be subtracted.
These cross-terms β_ℓ are publicly computable from the ciphertexts and the CRS:

β_ℓ = Σ_{i=1..B, i≠ℓ} e(ct_{i,1}, h_{B_max - ℓ + i + 1}) = Σ_{i≠ℓ} [[r_i · τ^(B_max - ℓ + i + 1)]]_T
The omitted middle power ensures that the CRS element h_{B_max - ℓ + i + 1} is available for all i ≠ ℓ (the exponent would only hit B_max + 1 when i = ℓ, which is excluded from the sum). Subtracting β_ℓ from α_ℓ leaves exactly the encryption pad [[r_ℓ · τ^(B_max+1)]]_T, and removing the pad recovers the message:

m̂_ℓ = ct_{ℓ,2} - (α_ℓ - β_ℓ)
Computing all β_ℓ values naively requires B(B-1) pairings: for each of B slots, sum B-1 pairing terms. This is O(B²), which is completely impractical at large batch sizes. This is the problem that the FFT optimization solves.
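The whole two-phase protocol, including the naive O(B²) cross-term sum, can be verified end to end in the toy exponent model (no security, just a check that the cancellation algebra works as described):

```python
# End-to-end BTX batch decryption, toy exponent model: Phase 1 builds the
# batch key sigma, Phase 2 opens each slot by "pairing" (multiplication mod q)
# and subtracting the publicly computable cross-terms beta.
import random

q = 2**61 - 1
B_MAX, B = 4, 3
tau = random.randrange(1, q)
crs = {i: pow(tau, i, q) for i in range(1, 2 * B_MAX + 1) if i != B_MAX + 1}
ek = pow(tau, B_MAX + 1, q)

msgs = [11, 22, 33]
cts = []
for m in msgs:
    r = random.randrange(1, q)
    cts.append((r, (m + r * ek) % q))          # (ct1 = [[r]]_1, ct2)

# Phase 1: sigma = sum_k tau^k * ct_{k,1} (an MSM in G1 in the real scheme)
sigma = sum(pow(tau, k, q) * cts[k - 1][0] for k in range(1, B + 1)) % q

# Phase 2: open each slot l
recovered = []
for l in range(1, B + 1):
    alpha = sigma * crs[B_MAX - l + 1] % q     # e(sigma, [[tau^(B_MAX-l+1)]]_2)
    beta = sum(cts[i - 1][0] * crs[B_MAX + i - l + 1]
               for i in range(1, B + 1) if i != l) % q
    recovered.append((cts[l - 1][1] - (alpha - beta)) % q)

assert recovered == msgs
```

Note that the beta sum only ever indexes CRS powers in [B_max + 2 − B, B_max + B], which skips the punctured middle power exactly when i = ℓ.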
FFT Acceleration: From O(B²) to O(B log B)
The O(B²) cost is the only remaining obstacle. The key insight from the paper is a structural observation about the cross-terms: the vector (β₁, ..., β_B) is a contiguous coefficient window of a bilinear polynomial product. This is precisely the pattern known as a middle product in polynomial arithmetic, studied by Hanrot, Quercia, and Zimmermann in the context of power series division.
The Middle-Product Structure
Define a G1-valued polynomial from the ciphertext batch:

A(X) = Σ_{k=1..B} ct_{k,1} · X^(k-1)

Define a fixed G2-valued polynomial from the CRS (with a zero at the missing middle position):

Q(X) = Σ_{u=0..2·B_max−1} q_u · X^u, where q_u = [[τ^(2·B_max − u)]]₂ if 2·B_max − u ≠ B_max + 1, and q_u = 0 if 2·B_max − u = B_max + 1
The coefficient of X^(B_max + ℓ - 2) in the product A(X) ◦ Q(X) is exactly β_ℓ. The vector (β₁, ..., β_B) is a contiguous block of B coefficients in the product polynomial, starting at degree B_max - 1. This is the classical middle product: compute the full polynomial product and extract the relevant window.
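The claim can be checked numerically in the toy exponent model: the naive cross-terms coincide with a window of the plain convolution of A's and Q's coefficient vectors:

```python
# Verifying the middle-product structure: (beta_1, ..., beta_B) is a window
# of the convolution of the ciphertext polynomial A and the CRS polynomial Q.
# Toy exponent model, no security.
import random

q = 2**61 - 1
B_MAX, B = 4, 3
tau = random.randrange(1, q)
crs = {i: pow(tau, i, q) for i in range(1, 2 * B_MAX + 1) if i != B_MAX + 1}
ct1 = [random.randrange(1, q) for _ in range(B)]   # first ciphertext components

# naive cross-terms, directly from the definition
beta = [sum(ct1[i - 1] * crs[B_MAX + i - l + 1]
            for i in range(1, B + 1) if i != l) % q
        for l in range(1, B + 1)]

# A(X): coefficients ct1[0..B-1]; Q(X): q_u = tau^(2*B_MAX - u), punctured
a = ct1
qs = [0 if 2 * B_MAX - u == B_MAX + 1 else crs[2 * B_MAX - u]
      for u in range(2 * B_MAX)]

# full convolution, then read off the B-wide window starting at degree B_MAX - 1
prod = [0] * (len(a) + len(qs) - 1)
for i, ai in enumerate(a):
    for u, qu in enumerate(qs):
        prod[i + u] = (prod[i + u] + ai * qu) % q

assert beta == prod[B_MAX - 1 : B_MAX - 1 + B]
```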
Why FFT Works Here
Since Q(X) is fixed (it depends only on the CRS, not on the ciphertexts), its Fourier representation can be precomputed once and reused across all batches. This is a critical engineering observation. The online cost per batch is:
1. **Forward FFT of A(X) in G1.** One FFT of size m = 2B over the G1-valued ciphertext polynomial. This costs O(B log B) G1 scalar multiplications.
2. **Pointwise Pairings.** Compute m = 2B pairings in the evaluation domain between the FFT of A and the precomputed FFT of Q. This gives the FFT of the product polynomial in GT.
3. **Inverse FFT in GT.** Apply an inverse FFT in GT to recover the product coefficients. GT arithmetic is 3 to 5 times more expensive than G1 arithmetic, but the cost is still O(B log B).
4. **Read Off the Window.** Extract the B relevant coefficients from the product polynomial. These are the β_ℓ values.
The result is O(B log B) group exponentiations and O(B) pairings for the entire cross-term computation, down from O(B²) pairings naively. For B = 512, this is the difference between roughly 262,000 pairings and roughly 1,000, a reduction of more than two orders of magnitude in the dominant cost.
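The same four-step pipeline can be sketched over a small prime field using a number-theoretic transform (an exact FFT mod p); the real scheme runs this pattern over G1/GT elements with pairings in place of field multiplications. The modulus, root, and helper names below are illustrative, not from the paper:

```python
# FFT-style middle product over a prime field, via an iterative NTT.
# Illustrates the O(B log B) pattern; BTX does this with group elements.
MOD = 998244353   # NTT-friendly prime: 119 * 2^23 + 1
ROOT = 3          # primitive root mod MOD

def ntt(a, invert=False):
    n = len(a)                                 # n must be a power of two
    a = a[:]
    j = 0
    for i in range(1, n):                      # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                         # Cooley-Tukey butterflies
        w = pow(ROOT, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        inv_n = pow(n, MOD - 2, MOD)
        a = [x * inv_n % MOD for x in a]
    return a

def middle_product(a, qs, start, width):
    """Coefficients [start, start+width) of a*qs via pointwise NTT products."""
    n = 1
    while n < len(a) + len(qs) - 1:
        n <<= 1
    fa = ntt(a + [0] * (n - len(a)))           # step 1: forward transform of A
    fq = ntt(qs + [0] * (n - len(qs)))         # in BTX this one is precomputed
    pointwise = [x * y % MOD for x, y in zip(fa, fq)]   # step 2
    prod = ntt(pointwise, invert=True)         # step 3: inverse transform
    return prod[start : start + width]         # step 4: read off the window

import random
a = [random.randrange(MOD) for _ in range(5)]
qs = [random.randrange(MOD) for _ in range(8)]
naive = [sum(a[i] * qs[d - i] for i in range(len(a)) if 0 <= d - i < len(qs)) % MOD
         for d in range(3, 3 + 5)]
assert middle_product(a, qs, 3, 5) == naive
```

Because `fq` depends only on the fixed polynomial, it is computed once and amortized across batches, exactly mirroring the precomputed Fourier representation of Q(X) in the text.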
Phase Decomposition and Parallelism
A clean consequence of the middle-product structure is a natural parallelism between the precompute step and the threshold protocol. The β values depend only on the ciphertext batch and the public CRS, not on any secret material. This means the precompute step can begin as soon as the batch is fixed, running concurrently with the network round-trip for validator shares. In encrypted mempool deployments where the threshold protocol is dominated by network latency, this overlap is significant.
Because precompute and threshold decryption run in parallel, the observable latency is dominated by whichever finishes last, not their sum. In practice, network round-trips for validator shares often dominate, meaning the FFT computation is effectively free in the critical path.
Karatsuba Alternative
The paper also presents a Karatsuba-style recursive middle-product algorithm that achieves O(B^1.585) operations without requiring a GT-domain FFT. Since GT arithmetic is significantly more expensive than G1 or G2 arithmetic, the Karatsuba approach can be competitive at moderate batch sizes where the GT FFT's constant factor hurts. Both algorithms are described and implemented.
The Threshold Extension
The extension from a single-server scheme to a threshold scheme with N servers and reconstruction threshold t+1 is one of the most satisfying parts of the BTX construction. It follows almost immediately from the algebra.
Key Generation
The setup is identical to the single-server case: sample τ, publish the punctured CRS. The difference is in what gets shared. Instead of giving one server the full secret τ, the key generation algorithm Shamir-shares each power of τ independently:
Server j receives: sk_j = ( <τ>_j, <τ²>_j, ..., <τ^(B_max)>_j )
Each server holds B_max Shamir shares, one per power of τ. The decryption key dk is augmented with public commitments to each server's shares, [[<τ^i>_j]]₂, enabling share verification without revealing the shares.
Threshold Decryption
The key observation making the threshold extension efficient is that the batch secret key σ is a linear function of (τ, τ², ..., τ^(B_max)) for any fixed batch of ciphertexts. Linearity means Shamir sharing composes perfectly with the batch key computation. Each server j can compute its share of σ locally:

σ_j = Σ_{k=1..B} <τ^k>_j · ct_{k,1}

Each server broadcasts a single G1 element σ_j to the combiner. Communication per server is O(1), completely independent of B. The combiner collects shares from any t+1 servers and applies Lagrange interpolation in the exponent to reconstruct σ:

σ = Σ_{j ∈ S} λ_{j,S} · σ_j, where S is the set of t+1 responding servers and λ_{j,S} are the Lagrange coefficients for evaluation at zero
From this point, everything proceeds exactly as in the single-server case: compute α and β, subtract, unmask. The threshold extension adds exactly one additional step (interpolation) and zero additional communication rounds relative to the single-server case.
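The linearity argument can be checked in the toy exponent model: Shamir-share each power of τ, let every server emit a single share of σ, and interpolate any t+1 shares at zero. The parameters N = 5, t = 2 are arbitrary toy choices:

```python
# Threshold sketch, toy exponent model: Shamir-share each tau^k, compute
# per-server sigma shares, and Lagrange-interpolate t+1 of them back to sigma.
import random

q = 2**61 - 1
B_MAX, B = 4, 3
N, t = 5, 2                     # 5 servers; any t+1 = 3 can reconstruct
tau = random.randrange(1, q)

def share(secret):
    # random degree-t polynomial with p(0) = secret; server j holds p(j)
    coeffs = [secret] + [random.randrange(q) for _ in range(t)]
    return [sum(c * pow(j, e, q) for e, c in enumerate(coeffs)) % q
            for j in range(1, N + 1)]

# independently share tau^1 .. tau^B_MAX
shares = [share(pow(tau, k, q)) for k in range(1, B_MAX + 1)]

ct1 = [random.randrange(1, q) for _ in range(B)]   # first ciphertext components

# each server j computes one share of sigma locally (one "group element")
sigma_shares = [sum(shares[k - 1][j] * ct1[k - 1] for k in range(1, B + 1)) % q
                for j in range(N)]

def lagrange_at_zero(points):
    total = 0
    for xj, yj in points:
        num, den = 1, 1
        for xm, _ in points:
            if xm != xj:
                num = num * (-xm) % q
                den = den * (xj - xm) % q
        total = (total + yj * num * pow(den, q - 2, q)) % q
    return total

subset = [(j, sigma_shares[j - 1]) for j in (1, 3, 5)]   # any t+1 servers
sigma = lagrange_at_zero(subset)

expected = sum(pow(tau, k, q) * ct1[k - 1] for k in range(1, B + 1)) % q
assert sigma == expected
```

Because interpolation commutes with the linear MSM, each server's message is one value regardless of B, which is the O(1) per-server communication claim from the text.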
Share Verification
A malicious server could submit a garbage share σ_j to disrupt decryption of honest transactions. BTX prevents this via a batched pairing check. The combiner verifies each server j's share using:

e(σ_j, [[1]]₂) = Σ_{ℓ=1..B} e(ct_{ℓ,1}, pk^ℓ_j)
where pk^ℓ_j = [[<τ^ℓ>_j]]₂ is the published commitment to server j's share of τ^ℓ. This check verifies the entire share in one batched multi-pairing operation, costing O(B) pairings and essentially independent of the number of servers. Servers that fail the check are excluded from reconstruction.
Is Security Proven?
BTX provides formal proofs of three security properties. Understanding what each property means in practice is as important as knowing that it holds.
1. Correctness
For any honestly generated batch of ciphertexts with B ≤ B_max, the decryption algorithm recovers all messages correctly with probability 1. This is a deterministic guarantee, not a probabilistic one.
2. IND-CCA Security (Chosen Ciphertext Attack Security)
IND-CCA security is the gold standard for encryption schemes. It guarantees that an adversary cannot distinguish an encryption of message m₀ from an encryption of message m₁, even when allowed to query the decryption oracle on batches of their choice (subject to the constraint that the challenge ciphertext is not in any queried batch).
BTX achieves IND-CCA security under two assumptions:
n-Power Diffie-Hellman (nDH): The hardness assumption underlying the scheme. It states that given all powers of a random τ in G1 and G2 (except the missing middle power), it is computationally infeasible to distinguish [[τ^(n+1) · y]]T from a random GT element. This is a standard pairing-based hardness assumption.
Simulation-Extractable NIZK: The Schnorr proof attached to each ciphertext must be simulation-extractable, meaning a witness (the encryption randomness r) can be efficiently extracted from any valid proof. This is achieved using Schnorr plus Fiat-Shamir, proven in the Generic Group Model combined with the Random Oracle Model (GGM+ROM).
The use of GGM+ROM rather than the Algebraic Group Model (AGM) is a deliberate design choice. The AGM is a stronger and more controversial idealization that some researchers consider less conservative. The authors provide an argument that works in GGM+ROM, which is a more standard combination. This is a subtle but meaningful point for practitioners who care about the assumptions underlying their deployed cryptography.
3. Robustness
Robustness guarantees that an adversary who corrupts up to t servers cannot cause decryption of honestly-encrypted ciphertexts to fail by submitting malformed decryption shares. With the share verification check in place, malformed shares are detected and excluded before interpolation. Robustness holds as long as the adversary controls fewer than N/2 servers.
CCA Security in the Multi-Server Setting
CCA security in the threshold setting follows by a standard reduction from the single-server case. The security holds against adversaries that statically corrupt up to t servers before the protocol begins. Dynamic adaptive corruption (where the adversary can corrupt servers after seeing partial protocol output) is not addressed in this work and remains an open problem in the BTE literature generally.
Benchmarks
The implementation uses the BLS12-381 pairing-friendly curve with the blst library, compiled with Clang 21.1.8 in release mode with AVX-512, ADX, and native vectorization enabled. Experiments run on an AWS instance with an Intel Xeon Platinum 8488C processor at up to 3.8 GHz.
All three schemes (BTX, PFE, and BEAT++) are implemented in the same aggressively optimized shared codebase to ensure a fair comparison. The baselines are not lightly optimized reference implementations; they are best-effort implementations using the same engineering toolkit as BTX, including AVX-512 vectorization, FFT-optimized backends where applicable, and fine-grained micro-optimizations.
Phase-by-Phase Breakdown at B = 512 (Single Core)
| Phase | BTX | PFE | BEAT++ | Notes |
|---|---|---|---|---|
| precompute(512) | ~0.959 ms/item | ~1.596 ms/item | ~0.960 ms/item | BTX 1.6× faster than PFE |
| partialDecrypt(512) | ~0.019 ms/item | ~0.019 ms/item | ~0.039 ms/item | BTX and PFE identical |
| open(512) | ~0.171 ms/item | ~0.723 ms/item | ~0.171 ms/item | BTX 4.2× faster (1 vs 4 pairings/slot) |
The open phase gain is the most significant. PFE requires 4 pairings per ciphertext slot in its opening step due to the partial fraction decomposition approach. BTX requires only 1. At scale, this is the dominant cost and the source of the 4.2× speedup in that phase.
Ciphertext Size Comparison
Every prior BTE scheme requires at least 2 G1 elements plus 1 GT element per ciphertext. BTX requires only 1 G1 element plus 1 GT element. On BLS12-381:
| Scheme | G1 Elements | GT Elements | Approximate Size |
|---|---|---|---|
| Prior best (PFE, BEAT++, etc.) | 2 | 1 | ~624 bytes + message |
| BTX | 1 | 1 | ~528 bytes + message |
This size reduction matters at the mempool level. An encrypted mempool at scale holds tens of thousands of pending transactions. Saving ~96 bytes per transaction at 10,000 pending transactions is nearly 1 MB of mempool storage and propagation overhead eliminated.
Why This Matters: Five Takeaways for Researchers and Builders
1. The Trilemma Is Solved
Every prior scheme forced a tradeoff between epoch-independence, collision-freedom, and computational efficiency. BTX is the first construction to achieve all three simultaneously, with no significant penalty on any axis. This is not an incremental improvement; it is a categorical advance in the design space.
2. The Middle-Product Technique Is Likely Reusable
The observation that batch cryptographic cross-term computations can be cast as polynomial middle products, and therefore FFT-accelerated with fixed preprocessing, is likely applicable beyond BTE. Protocol researchers working on other batched threshold primitives, aggregated signatures, or multi-recipient encryption schemes should examine whether their bottleneck cross-term computations have this structure.
3. Communication Efficiency Enables Large Validator Sets
O(1) per-server communication (one G1 element per server regardless of batch size) means BTX scales gracefully to large committee sizes. Ethereum's validator set numbers in the hundreds of thousands. Any practical encrypted mempool must not require communication that grows in both N and B. BTX satisfies this.
4. The Fixed-Size CRS Enables Long-Lived Deployments
Unlike Fernando et al.'s TrX scheme, BTX's CRS is fixed at 2·B_max elements and does not grow with the number of decryption sessions. For a blockchain running millions of blocks over years, this is a necessary property for any production deployment. A growing CRS would eventually become an unacceptable storage and bandwidth burden on all participants.
5. The Security Proof Is Conservative
The GGM+ROM proof for Fiat-Shamir/Schnorr is more conservative than the AGM-based arguments used by several competing schemes. For practitioners deploying cryptography in adversarial environments with long time horizons, the strength of the underlying assumptions matters. BTX's authors made a deliberate choice to avoid the stronger and more contested AGM idealization.
BTX is the right building block for production encrypted mempools today. It is fast enough to fit in real block time budgets, simple enough to implement and audit, conservative enough in its security assumptions, and flexible enough in its design (no index, no epoch, no coordination) to accommodate real-world transaction traffic patterns.
Glossary
**Mempool.** The set of unconfirmed transactions broadcast to the network but not yet included in a block. Transactions in the mempool are publicly visible on most blockchains today.

**Maximal Extractable Value (MEV).** Profit extracted by a block producer by reordering, inserting, or censoring transactions within blocks they produce. Formerly called Miner Extractable Value in proof-of-work contexts; the term was generalized when proof-of-stake became dominant.

**Threshold Encryption.** An encryption scheme where the decryption key is distributed among N parties such that any t+1 of them can collectively decrypt, but fewer than t+1 cannot. This prevents any single party from being a decryption bottleneck or single point of failure.

**Batch Threshold Encryption (BTE).** A threshold encryption scheme optimized for the case where a committee must decrypt a specific chosen subset of ciphertexts from a larger pool, with total communication sublinear per server in the batch size and independent of the pool size.

**Pairing-Friendly Curve.** An elliptic curve that supports an efficiently computable bilinear map (pairing) between elements of its associated groups. BLS12-381 is the specific curve used in BTX, widely adopted in blockchain cryptography due to its balance of security and performance.

**Bilinear Pairing.** A map e: G1 × G2 → GT satisfying bilinearity (e(aP, bQ) = e(P,Q)^(ab)) and non-degeneracy. This allows multiplication in the exponent without exposing the exponents, enabling a wide range of advanced cryptographic constructions.

**Common Reference String (CRS).** A set of public parameters generated during a one-time trusted setup and used by all participants in a cryptographic scheme. In BTX, the CRS is a list of group elements encoding powers of a secret scalar τ, with one power deliberately omitted.

**Additive ElGamal Encryption.** A variant of ElGamal encryption operating in the additive group of an elliptic curve or target pairing group. A ciphertext encrypts message m as (rG, m + rPK) where r is random and PK is the public key. BTX uses this structure in the GT group.

**Multi-Scalar Multiplication (MSM).** The operation of computing a linear combination Σ a_i · P_i of elliptic curve points, where a_i are scalars and P_i are group elements. MSM is one of the core computational primitives in pairing-based cryptography and is the dominant cost in the per-server threshold computation in BTX.

**Middle Product.** A polynomial operation that extracts a specific contiguous coefficient window from the product of two polynomials, without computing the full product. Middle products can be computed in O(n log n) using FFT, rather than the O(n²) cost of extracting terms naively.

**Shamir Secret Sharing.** A scheme for distributing a secret among N parties such that any t+1 shares suffice to reconstruct the secret via Lagrange interpolation, but any t or fewer shares reveal nothing. BTX uses Shamir sharing independently applied to each power of τ.

**IND-CCA Security.** Indistinguishability under Chosen Ciphertext Attack. The standard security notion for encryption: an adversary cannot distinguish encryptions of two messages of their choice, even with access to a decryption oracle, as long as they do not query the oracle on the challenge ciphertext. This is the strongest standard notion of encryption security.

**Simulation-Extractable NIZK.** A non-interactive zero-knowledge proof system with an additional property: a witness (secret value) can be efficiently extracted from any valid proof, even in a setting where simulated proofs have been produced. In BTX, this is used to extract encryption randomness from client proofs, enabling CCA security.

**Fiat-Shamir Transform.** A method for converting an interactive proof of knowledge into a non-interactive proof by replacing the verifier's random challenges with the output of a cryptographic hash function (modeled as a random oracle). BTX applies Fiat-Shamir to the Schnorr identification protocol to produce its NIZK proofs.

**Generic Group Model (GGM).** A security model where adversaries are treated as if they interact with the underlying group only through a generic oracle interface, without exploiting any special structure of the specific group implementation. Combined with the Random Oracle Model (ROM), GGM+ROM is the framework used for BTX's security proofs.

**AVX-512.** An instruction set extension for Intel processors enabling 512-bit wide SIMD (Single Instruction Multiple Data) operations. BTX's implementation exploits AVX-512 for vectorized finite field and elliptic curve arithmetic, significantly accelerating the low-level operations underlying pairing computations.

**Epochless.** In the BTE context, a scheme is epochless if ciphertexts are not bound to a specific decryption session, block, or time period. Epochless schemes allow a transaction encrypted at any time to be included in any future block without re-encryption.

**Collision-Free.** The property that the encryption process does not require any user-chosen index or parameter that could be exploited by validators to selectively prevent specific transactions from being decryptable. BTX achieves this by making encryption indexless and collision-free.