## Vector Encoding Schemes

In (key-policy) attribute-based encryption (ABE) and predicate encryption (PE), a ciphertext is associated with an attribute $a$, and the ciphertext can only be decrypted by a party holding a secret key for a predicate $f$ such that $f(a) = 1$. In the former, the attribute is public, whereas in the latter it is hidden. Inner product predicates are a powerful class of predicates supported by several schemes, starting with the PE scheme of Katz, Sahai and Waters [1]. With inner product predicates, an attribute $a$ corresponds to a vector $\mathbf{x}_a \in \mathbb{Z}_q^n$, a predicate $f$ corresponds to a vector $\mathbf{y}_f \in \mathbb{Z}_q^n$, and it holds that $f(a) = 1$ if and only if $\langle\mathbf{x}_a, \mathbf{y}_f\rangle \equiv 0 \pmod{q}$, where $\langle\cdot, \cdot\rangle$ denotes the inner product of the two vectors. This turns out to be quite an expressive notion, as many families of predicates can be realized. For each class of predicates, an encoding must be devised that maps an attribute $a$ to a vector $\mathbf{x}_a \in \mathbb{Z}_q^n$ for some dimension $n$ and modulus $q$; in addition, the encoding specifies how a policy $f$ is encoded as a vector $\mathbf{y}_f \in \mathbb{Z}_q^n$. In this post, I examine such vector encodings. An interesting paper to read concerning vector encodings for various classes of predicates is [2], which studies the complexity of $n$ and $q$ for several classes of widely-used predicates. We now formalize the notion of a vector encoding for inner product predicates. From now on, we omit the mod $q$ and assume that all arithmetic is performed in $\mathbb{Z}_q$.

**Definition 1.** A *vector encoding* scheme $\mathcal{E}_{\mathbb{F}, \mathbb{A}}$ for a class of predicates $\mathbb{F}$ and a set of attributes $\mathbb{A}$ is a tuple of probabilistic polynomial time (PPT) algorithms defined as follows:

- $\mathsf{Setup}(1^\lambda)$: On input a security parameter $\lambda$, output parameters $\mathsf{params}$, which include $n$, the dimension of the vectors for the scheme, and $q$, the modulus. The parameters $\mathsf{params}$ are passed as an implicit input to the remaining algorithms.
- $\mathsf{EncodeA}(a)$: On input an attribute $a \in \mathbb{A}$, output an encoding $\mathbf{x}_a \in X_a \subset X \subset \mathbb{Z}_q^{n}$, where $X \subset \mathbb{Z}_q^n$ is the set of all attribute encodings and $X_a \subset X$ is the subset of encodings for attribute $a$.
- $\mathsf{EncodeP}(f)$: On input a predicate $f \in \mathbb{F}$, output an encoding $\mathbf{y}_f \in Y_f \subset Y \subset \mathbb{Z}_q^{n}$, where $Y \subset \mathbb{Z}_q^n$ is the set of all predicate encodings and $Y_f \subset Y$ is the subset of encodings for predicate $f$.

**Definition 2** (*Encoding Equivalence*). Two encodings $\mathbf{x} \in X$ and $\mathbf{x}' \in X$ are said to be *equivalent*, denoted by $\mathbf{x} \equiv \mathbf{x}'$, if and only if for all $f \in \mathbb{F}$ and all $\mathbf{y}_f \in Y_f$, it holds that
$$ \langle \mathbf{x}, \mathbf{y}_f \rangle = 0 \iff \langle \mathbf{x}', \mathbf{y}_f \rangle = 0. $$

### Properties

For any attribute $a \in \mathbb{A}$ and encoding $\mathbf{x}_a \in X_a$, and any predicate $f \in \mathbb{F}$ and encoding $\mathbf{y}_f \in Y_f$, the following properties hold:

1. $f(a) \iff \langle \mathbf{x}_a, \mathbf{y}_f \rangle = 0$.
2. For any $r \in \mathbb{Z}_q$, we have $r \cdot \mathbf{x}_a \in X_a$ and $r \cdot \mathbf{y}_f \in Y_f$.
3. $(X, +)$ is an additive group which has $(X_a, +)$ as a subgroup.
4. For any $a' \in \mathbb{A}$ and $\mathbf{x}_{a'} \in X_{a'}$, it holds that
$$ \mathbf{x}_a + \mathbf{x}_{a'} \equiv \mathbf{x}_{a \odot a'}, $$
where $\mathbf{x}_{a \odot a'} \in X_{a \odot a'}$ is an encoding of $a \odot a'$ and $(\mathbb{A}, \odot)$ is a semilattice.

Note that property 4 is of relevance only to homomorphic schemes; if this is of interest, see [3] and my [previous post](https://scss.tcd.ie/~mclear/blog?post=ghe).

## Matching Words over a "Large" Alphabet

I'm not sure if the encoding below has already been described somewhere else; I have personally not come across it. It is quite simple and natural, and may be useful.

Let $\Sigma$ be an alphabet of size $m = |\Sigma|$. For the encoding we present below, we allow $m$ to be superpolynomial in $\lambda$. Consider attributes that are words in $\Sigma^\ast$ of length at most $\ell$. Therefore, we define the set of attributes as
$$\mathbb{A} := \{w \in \Sigma^\ast : |w| \leq \ell\}.$$
A predicate is associated with a *pattern*, which consists of a mix of symbols from $\Sigma$ and wildcards (or "don't cares"), signified with the special symbol $\ast$. A symbol from $\Sigma$ at position $i$ in the pattern matches only that exact symbol occurring at position $i$ of the attribute word, whereas a wildcard at position $i$ in the pattern matches any symbol occurring at that position. Formally, the set of predicates is defined as
$$ \mathbb{F} := \{\mathsf{match}_p : p \in (\Sigma \cup \{\ast\})^\ell\} $$
where the predicate $\mathsf{match}_p$ is defined as
$$\mathsf{match}_p(w) = \bigwedge_{i \in [\ell]} \big((p_i = \ast) \lor (i \leq |w| \land p_i = w_i)\big).$$
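As a sanity check, $\mathsf{match}_p$ is straightforward to evaluate directly. A minimal Python sketch (my own illustration; symbols are modelled as characters and the wildcard as the character `*`):

```python
def match(p, w, wildcard="*"):
    """Evaluate match_p(w): position i contributes true if p[i] is the
    wildcard, or if w has a symbol at position i equal to p[i]."""
    return all(
        c == wildcard or (i < len(w) and c == w[i])
        for i, c in enumerate(p)
    )

assert match("ab*", "abc")     # non-wildcard positions agree with w
assert match("a**", "a")       # wildcards also match positions beyond |w|
assert not match("ab*", "ac")  # mismatch at position 2
```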

Now we give the vector encoding scheme for such a pair $(\mathbb{F}, \mathbb{A})$.

- $\mathsf{Setup}(1^\lambda)$: On input a security parameter $\lambda$, set $n \gets \ell + 1$ and choose the smallest prime $q > \max(m, 2^\lambda)$. Choose an efficiently computable injective map $\psi : \Sigma \to \mathbb{Z}^\ast_q$. Output $\mathsf{params} := (n, q, \psi)$.

- $\mathsf{EncodeA}(w)$: On input an attribute $w \in \mathbb{A}$, compute $t_i \gets \psi(w_i) \in \mathbb{Z}_q^\ast$ for all $i \in [|w|]$, then set the vector $\mathbf{x} \gets (t_1, \ldots, t_{|w|}, r_1, \ldots, r_{\ell - |w|}, 1) \in \mathbb{Z}_q^n$, where $r_j \stackrel{\$}{\gets} \mathbb{Z}_q$ for every $j \in [\ell - |w|]$. Output the encoding $\mathbf{x} \in \mathbb{Z}_q^n$.

- $\mathsf{EncodeP}(\mathsf{match}_p)$: On input a predicate $\mathsf{match}_p \in \mathbb{F}$, where $p \in (\Sigma \cup \{\ast\})^\ell$, set $k \gets 0$. For every $i \in [\ell]$: if $p_i = \ast$, set $y_i \gets 0$; else set $y_i \gets \psi(p_i)^{-1} \in \mathbb{Z}_q^\ast$ and set $k \gets k + 1$. Output the encoding $\mathbf{y} := (y_1, \ldots, y_\ell, -k)$.

Note that in the above encoding for predicates, the statement $y_i \gets \psi(p_i)^{-1}$ means that we are setting $y_i$ to the multiplicative inverse in $\mathbb{Z}_q$ of the element $\psi(p_i)$; in particular, it does not refer to inverting the map $\psi$ (which we would instead write as $\psi^{-1}$). Furthermore, the variable $k$ counts the number of non-wildcard symbols in the pattern $p$.

So that's it for now. I don't have much else to say about vector encodings for the moment.

## References

1. *Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products*. Jonathan Katz, Amit Sahai and Brent Waters. EUROCRYPT 2008.
2. *On the Inner Product Predicate and a Generalization of Matching Vector Families*. Balthazar Bauer, Jevgenijs Vihrovs and Hoeteck Wee. IACR Cryptology ePrint Archive, 2018.
3. *A Note on Attribute-Based Group Homomorphic Encryption*. Michael Clear and Ciaran McGoldrick. Cryptology ePrint Archive, Report 2017/752. 2017.
# Another Look at Attribute-Based Group Homomorphic Encryption

## Update

This is being a bit pedantic, but I need to return briefly to the attribute semilattices section and the semilattice of finite words over an alphabet (in our example, the alphabet $\{0, 1\}$ with the maximum length of words being $n$), in which the semilattice operation $\odot$ returns the prefix common to two words. Our definition of the set $V$ is satisfactory for "initial" attributes. However, it is not closed under the semilattice operation, which is defined to be set intersection. The reason is that "gaps" may appear in a resulting set; that is, positions $i$ where neither $(i, 0)$ nor $(i, 1)$ is in the set. For such sets, we need to specify how a set is interpreted as a word. A simple solution is to define the word associated with a set as the contiguous string of symbols from the beginning until the "first" gap. For words of length $n$, we can imagine a gap at position $n + 1$. Now we can capture the set of valid attribute sets, closed under set intersection, as follows:
$$ V := \{W \subset U : \exists i \in [n + 1] \;\; (i, 0) \notin W \land (i, 1) \notin W \land ((j, 0) \in W) \neq ((j, 1) \in W) \;\forall j \in [i - 1]\}. $$
The above still permits positions $i$ that have both pairs $(i, 0)$ and $(i, 1)$ in the set, but such positions occur after the "first" gap and are therefore considered irrelevant. If we wish to exclude both pairs from occurring at any position, while capturing everything else captured by the above definition, then a terser expression is
$$ V := \{W \subset U : \forall i \in [n] \;\; \neg((i, 0) \in W \land (i, 1) \in W)\}. $$
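Both definitions of $V$ can be checked mechanically. A small Python sketch (my own illustration), including a set that the first definition accepts but the terser one rejects:

```python
def valid_strict(W, n):
    """Terser definition: no position carries both (i, 0) and (i, 1)."""
    return all(not ((i, 0) in W and (i, 1) in W) for i in range(1, n + 1))

def valid_gap(W, n):
    """First definition: some gap position i exists such that every
    position before it carries exactly one of (j, 0), (j, 1)."""
    for i in range(1, n + 2):  # a virtual gap may sit at position n + 1
        gap = (i, 0) not in W and (i, 1) not in W
        prefix_ok = all(((j, 0) in W) != ((j, 1) in W) for j in range(1, i))
        if gap and prefix_ok:
            return True
    return False

W = {(1, 1), (2, 0)}           # the word "10", gap at position 3
assert valid_gap(W, 3) and valid_strict(W, 3)
W2 = {(1, 1), (3, 0), (3, 1)}  # both bits at position 3, after the gap at 2
assert valid_gap(W2, 3) and not valid_strict(W2, 3)
```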

In this post I expand upon the notions described in my paper "A Note on Attribute-Based Group Homomorphic Encryption" [1]. This post assumes that you have read and are familiar with that paper; the same notation defined therein is used here.

The first thing I want to do is to rigorously define a function $\tau : \mathbb{A} \to \mathbb{F}$ that is used in one of my papers, the full version of [2] (available on arXiv), but is regrettably not properly defined in that paper. The function $\tau$ is needed by an abstraction called GIFT, on which the characterization of IND-AD-CCA1 security in that paper relies. At the end of the post, I give an overview of this characterization and explain where $\tau$ fits in, but for now, I'm going to make an attempt at its proper definition.

## The Minimal Policy Associated with an Attribute

**Definition 1.** We can define a partial order $\preceq$ on the set of attributes $\mathbb{A}$, which follows from the attribute semilattice $(\mathbb{A}, \odot)$:
$$ a \preceq b := f(a) \implies f(b) \;\;\;\; \forall f \in \mathbb{F}. $$
The following is an invariant: $a \preceq b \iff a \odot b = a$.

**Definition 2.** The *minimal policy* with respect to $a \in \mathbb{A}$ is the unique predicate $f \in \mathbb{F}$ satisfying
$$ f(a) \;\land\; a \preceq a' \;\; \forall a' \in \mathsf{supp}(f). $$
We define the map $\tau : \mathbb{A} \to \mathbb{F}$ as the map that sends an attribute $a$ to its minimal policy $f$.
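To make these definitions concrete, consider the class of prefix-matching policies over binary words discussed later in this post, where $\odot$ is the longest common prefix. A small sketch (my own illustration):

```python
def meet(a, b):
    """The semilattice operation for words: longest common prefix."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

def preceq(a, b):
    """a <= b iff a . b == a (the invariant from Definition 1)."""
    return meet(a, b) == a

def tau(a):
    """Minimal policy for a: the prefix policy whose pattern is a itself.
    Its support is the set of words with prefix a, all of which lie above a."""
    return lambda w: w.startswith(a)

assert preceq("10", "101") and not preceq("11", "101")
f = tau("10")
assert f("10") and f("101") and not f("0")  # supp(f) = words with prefix "10"
```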

**Definition 3.** The set of policies $\mathbb{F}$ is said to be *complete* iff $\tau$ is efficiently computable and $\tau(a)$ exists in $\mathbb{F}$ for all $a \in \mathbb{A}$.

So now we are equipped with what we need for the characterization of IND-AD-CCA1 security described at the end of the post; but first, I'm going to elaborate a bit more on attribute semilattices.

## Attribute Semilattices

We now recall the equivalence relation $\sim$ that partitions the class of access policies $\mathbb{F}$, defined in Section 3.2 of [1]. The relation is defined as
$$ f \sim g := \mathsf{supp}(f) \cap \mathsf{supp}(g) \neq \emptyset. $$
This relation also induces a partition on the set of attributes $\mathbb{A}$. Let $F_1, \ldots, F_n \subset \mathbb{F}$ be the equivalence classes in $\mathbb{F} / \sim$. Then let $A_1, \ldots, A_n \subset \mathbb{A}$ be the pairwise disjoint subsets of attributes that collectively form a partition of $\mathbb{A}$, where $A_i = \bigcup_{f \in F_i} \mathsf{supp}(f)$. Each $A_i$ has its own identity element, and therefore $(A_i, \odot)$ is a bounded semilattice. The directed graph corresponding to the Hasse diagram for the partial order $(\mathbb{A}, \preceq)$ has $n$ disconnected components. The supremum of $A_i$ is the identity element in $A_i$. Now in many cases, the partial orders $(A_i, \preceq)$ may be lattices, i.e. they have a dual operation to $\odot$, and hence there is both a natural meet and join semilattice. This is the case for the example of the power set of a finite universe considered in Section 4.1 of [1]. The vector matching example in Section 4.2 of [1] is a special case of the power set example, obtained by choosing an appropriate representation. For example, the vector $(1, 0, 1)$ could be represented by the set of pairs $\{(1, 1), (2, 0), (3, 1)\}$, and an access policy associated with the pattern $(1, \ast, 1)$ is represented by the set $\{(1, 1), (3, 1)\}$; i.e. the pairs associated with wildcards in the pattern are excluded and hence the latter set is a subset of the former. Then the $\odot$ operation naturally corresponds to set intersection. However, this semilattice has no natural dual that is meaningful in the context of binary vector matching. Therefore, this attribute semilattice is not a lattice.

Another well-known example of a semilattice is the set of finite words over an alphabet ordered by prefix, where $\odot$ is defined as returning the prefix common to two words, which may in fact be the empty word $\epsilon$. This semilattice has a natural representation via sets, akin to our representation for the previous example. An issue with this representation is that it permits sets corresponding to invalid attributes. For a binary alphabet, we define the set of representative sets associated with well-formed attributes as follows. Let $U = \{(i, b) \in [n] \times \{0, 1\}\}$ and define
$$ V := \{W \subset U : (i, b) \in W \implies ((j, 0) \in W) \neq ((j, 1) \in W) \;\;\forall j \in [i], i \in [n], b \in \{0, 1\}\} $$
(what is meant by $((j, 0) \in W) \neq ((j, 1) \in W)$ is that either $(j, 0)$ or $(j, 1)$ is in $W$ but not both, i.e. exclusive OR). Now $V$ consists of the sets representing well-formed attributes, which are finite words in $\{w \in \{0, 1\}^\ast : |w| \leq n\}$. Recall that the policies match prefixes, so an attribute associated with a certain word matches a policy if it contains the prefix associated with that policy. The structure we have is a tree rooted at the empty string $\epsilon$. Every node corresponds to the minimal policy for the prefix obtained by following the path from the root to that node. Every node is also associated with an attribute. The attribute at a node matches all policies on the path from the root to the node, and the policy at a node matches all attributes at itself and its children. Now this semilattice is not bounded, i.e. it does not have an identity element, and this makes it unsuitable for ABGHE. We therefore have to add a logical identity element. If $\odot$ is defined as set intersection, a natural choice is the set $U$. While this is not a valid attribute ($U \notin V$), it can be assigned a special meaning in applications. By the transitivity property of $\sim$, it is easy to see that a single-rooted tree has a single identity element. If we exclude the empty string, the corresponding tree will have two disconnected components and there will be a partition of policies $\{F_0, F_1\}$ and of attributes $\{A_0, A_1\}$, where $A_0$ will have identity element $U \setminus \{(1, 1)\}$ and $A_1$ will have identity element $U \setminus \{(1, 0)\}$.
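A short sketch of this set representation (my own illustration; pairs are 1-indexed, $\odot$ is set intersection, words are read off until the first gap, and $U$ is the adjoined identity element):

```python
def to_set(w):
    """Represent the binary word w as a set of (position, bit) pairs."""
    return {(i + 1, int(b)) for i, b in enumerate(w)}

def to_word(W, n):
    """Recover the word: read positions in order until the first gap."""
    out = []
    for i in range(1, n + 1):
        if (i, 0) in W:
            out.append("0")
        elif (i, 1) in W:
            out.append("1")
        else:
            break
    return "".join(out)

n = 4
U = {(i, b) for i in range(1, n + 1) for b in (0, 1)}  # logical identity
assert to_word(to_set("101") & to_set("100"), n) == "10"  # common prefix
assert to_word(to_set("101") & U, n) == "101"             # U acts as identity
```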

It is worth pointing out that we can define $a \odot a'$ as the attribute associated with the lowest common ancestor of the node associated with $a$ and the node associated with $a'$. For any node in the tree, the semilattice operation returns that node if the two operands are a node from its left subtree and a node from its right subtree (in any order). This is straightforward to define when we have such a tree structure.

This got me thinking about the general case, that is, arbitrary semilattices, and wondering: is there a simple, equivalent graph-theoretic definition of the semilattice operation? Here is a simple definition that equivalently describes the semilattice operation from a graph theory viewpoint. Let $G$ be the directed graph corresponding to the Hasse diagram for the partial order induced by the semilattice operation. Now let $G'$ be the undirected version of $G$, obtained by replacing every directed edge with an undirected edge. Suppose we wish to compute $a \odot a'$ for attribute $a$ represented by vertex $v$ and attribute $a'$ represented by vertex $v'$. Then we can define $a \odot a'$ as the attribute associated with the vertex that intersects both a shortest path in $G'$ from $v$ to the infimum vertex and a shortest path in $G'$ from $v'$ to the infimum vertex, and whose distance from the infimum vertex is greatest. Note that there may be more than one "shortest" path (hence the slightly ambiguous wording); i.e. there may be several equal-length paths whose length is minimal, which we refer to as "shortest".
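To sanity-check this graph-theoretic definition, here is a small sketch on a semilattice I chose for illustration: the divisors of 12 ordered by divisibility, where the meet is the gcd and the infimum vertex is 1. A vertex $u$ lies on some shortest path from $v$ to the infimum in $G'$ iff $d(v, u) + d(u, 1) = d(v, 1)$:

```python
from collections import deque
from math import gcd

# Hasse diagram of the divisors of 12 under divisibility, as an
# undirected graph G'.
edges = [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def dists(src):
    """BFS distances from src in G'."""
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def meet(a, b, bottom=1):
    d_bot, d_a, d_b = dists(bottom), dists(a), dists(b)
    on_a = {u for u in adj if d_a[u] + d_bot[u] == d_a[bottom]}
    on_b = {u for u in adj if d_b[u] + d_bot[u] == d_b[bottom]}
    # the common vertex farthest from the infimum
    return max(on_a & on_b, key=lambda u: d_bot[u])

divs = (1, 2, 3, 4, 6, 12)
assert all(meet(a, b) == gcd(a, b) for a in divs for b in divs)
```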

Initially I was going to talk a bit at this point about vector encodings for inner product predicates (which could realize the above semilattice in a scheme such as KSW [3], for example), but I have decided to defer a discussion of these encodings to a later post.

## Characterization of IND-AD-CCA1 Security

With the definition of $\tau$ above under our belt, we can proceed to an overview of the characterization of IND-AD-CCA1 security from my paper on arXiv.

**Definition 4.** A normal subgroup $N \subseteq G$ of a group $G$ satisfies the property that for all $n \in N$, it holds that
$$ g n g^{-1} \in N \;\;\; \forall g \in G. $$
In other words, $N$ remains invariant under conjugation.

It is precisely the normal subgroups (and no other subgroups) that can be used to construct quotient groups. All subgroups of an abelian group are normal. The ABGHE schemes we mostly make reference to associate an abelian group of ciphertexts with every policy, and all subgroups thereof are therefore normal.

**Definition 5.** Let $f \in \mathbb{F}$ be a policy and $\mathcal{C}_f \subset \hat{\mathcal{C}}$ be a subgroup of the ciphertext space associated with $f$. Furthermore, let $\mathcal{N}_f \subset \mathcal{C}_f$ be a normal subgroup of $\mathcal{C}_f$. Then a *set of representatives* $\mathcal{R}_f \subset \mathcal{C}_f$ is a set of ciphertexts, each of which is a unique representative of an equivalence class in the quotient group $\mathcal{C}_f / \mathcal{N}_f$. For every $r \in \mathcal{R}_f$, the corresponding equivalence class is $\{rn \in \mathcal{C}_f : n \in \mathcal{N}_f\}$.
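A toy instance of Definition 5 (my own example) in the additive group $\mathbb{Z}_{12}$: taking $\mathcal{N} = \{0, 4, 8\}$ (every subgroup of an abelian group is normal), the set $\mathcal{R} = \{0, 1, 2, 3\}$ is a set of representatives, and the equivalence classes $\{r + n : n \in \mathcal{N}\}$ partition the group:

```python
G = set(range(12))  # Z_12 under addition mod 12
N = {0, 4, 8}       # normal subgroup
R = {0, 1, 2, 3}    # one representative per coset of N
classes = {r: {(r + n) % 12 for n in N} for r in R}

# the cosets are pairwise disjoint and together cover G
assert set().union(*classes.values()) == G
assert sum(len(c) for c in classes.values()) == len(G)
```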

I'm now going to discuss an abstraction that is defined in [4] for the public-key setting; an informal description is provided in the full version of [2] on arXiv in the context of predicate encryption, but that treatment is relevant to the attribute-based setting since no use is made of attribute privacy. The abstraction is called GIFT (Generic shIFT-Type), from Definition 3 in [4]. I'm going to repeat the informal description from the full version of my Africacrypt 2013 paper, specifically generalizing to the attribute-based setting and correcting for any small variations in notation. As an aside, I'm not too happy with Appendix A of my paper on arXiv; the writing is densely packed and no space is afforded to intuitions and gentle discussion. So now let us discuss the properties of a GIFT scheme in the context of ABGHE. We list the requirements as follows:

\n- The set of policies
`$\\mathbb{F}$<\/code> is `*complete<\/em> as per Definition 3.<\/li>\n*

*The public parameters *`$\\mathsf{PP}$<\/code> contains information to determine a non-trivial, proper normal subgroup ``$\\mathcal{N}_f$<\/code> of every group ``$\\mathcal{C}_f$<\/code>. Intuitively, ``$\\mathcal{N}_f$<\/code> consists of the honest encryptions of the identity element of the message space ``$\\mathcal{M}$<\/code> (say zero) under attribute ``$a$<\/code>, the unique attribute such that ``$\\tau(a) = f$<\/code>.<\/li>\n`

- It holds that for every `$f, g \in \mathbb{F}$`, the systems of representatives `$\mathcal{R}_f = \mathcal{C}_f / \mathcal{N}_f$` and `$\mathcal{R}_g = \mathcal{C}_g / \mathcal{N}_g$` have the same cardinality; that is, `$|\mathcal{R}_f| = |\mathcal{R}_g|$`.

- `$\mathsf{PP}$` contains an efficient function `$\psi : \mathbb{F} \times \mathcal{M} \to \hat{\mathcal{C}}$` with the property that `$\psi_f = \psi(f, \cdot)$` for any `$f \in \mathbb{F}$` is an isomorphism between `$\mathcal{M}$` and `$\mathcal{R}_f$`.

- To encrypt a message `$m \in \mathcal{M}$` under attribute `$a \in \mathbb{A}$`, perform the following steps:

  1. Compute `$f \gets \tau(a)$`.

  2. Choose a random `$n \stackrel{\$}{\gets} \mathcal{N}_f$`.

  3. Output the ciphertext `$\psi_f(m) \ast n \in \mathcal{C}_f$`.

- A secret key for policy `$f \in \mathbb{F}$` contains an efficient description of `$\psi_f^{-1} \circ \mu_f$`, where `$\mu_f : \hat{\mathcal{C}}_f \to \mathcal{R}_f$` is the map such that `$r = \mu_f(c)$` is the unique representative with `$c = r \ast n$` for some `$n \in \mathcal{N}_f$`.

The next thing we want to do is to identify conditions for a GIFT scheme to satisfy IND-AD-CCA1 security. Theorem 3 in the full version of my Africacrypt paper on arXiv shows that a GIFT ABGHE scheme is IND-AD-CCA1 secure if and only if a problem called ISOAP is hard. So what is this ISOAP problem?

ISOAP (Interactive Splitting Oracle-Assisted (Subgroup Membership) Problem) is a subgroup membership problem with access to a *splitting oracle*, which we will define momentarily. The goal is to determine whether an element `$c \in \mathcal{C}_f$` is a member of `$\mathcal{N}_f \subset \mathcal{C}_f$`. Queries can be made to the splitting oracle before the challenge is received. A splitting oracle for a group `$\mathcal{C}_f$` with normal subgroup `$\mathcal{N}_f$` answers with `$(r, n)$` when queried with an element `$c \in \mathcal{C}_f$` such that `$c = r \ast n$`. In fact, it is sufficient for the oracle to return `$r$`, since `$n$` is trivially obtained by computing `$c \ast r^{-1}$`. The splitting oracle corresponds to the decryption oracle in the IND-AD-CCA1 security game where the target attribute is `$a \in \mathbb{A}$` and `$f = \tau(a) \in \mathbb{F}$`. Theorem 3 in the full version of my paper on arXiv shows that the hardness of ISOAP and IND-AD-CCA1 security are equivalent for a GIFT PE scheme, but since the proof does not rely on attribute privacy, the equivalence also holds for a GIFT ABGHE scheme. KSW [3] is an example of a GIFT ABGHE scheme with respect to several attribute semilattices and classes of policies, such as those discussed above and in [1], but to the best of our knowledge it has not been shown to be CCA1 secure.
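To make the GIFT structure concrete, here is a toy sketch of my own (not taken from the cited papers, and public-key rather than attribute-based, so there is effectively a single policy) of how Goldwasser-Micali style encryption fits the mold: the ciphertext group is the set of elements of `$\mathbb{Z}_N^*$` with Jacobi symbol `$+1$`, the normal subgroup `$\mathcal{N}$` of encryptions of zero is the set of squares, `$\psi(1)$` is a fixed pseudosquare, and the secret key (the factorization of `$N$`) implements `$\psi^{-1} \circ \mu$` by deciding quadratic residuosity. The tiny primes are assumptions for illustration only:

```python
import random
from math import gcd

# Toy parameters (illustration only): p ≡ q ≡ 3 (mod 4), so -1 is a
# pseudosquare: a non-residue mod p and mod q, with Jacobi symbol +1 mod N.
p, q = 23, 31
N = p * q
x = N - 1  # psi(1); psi(0) = 1, so R = {1, x} is a set of representatives

def encrypt(m):
    """Return psi(m) * n, where n is a random element of N_f (a square)."""
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(x, m, N) * pow(r, 2, N)) % N

def is_qr(a, prime):
    """Euler's criterion: is a a quadratic residue modulo the odd prime?"""
    return pow(a % prime, (prime - 1) // 2, prime) == 1

def decrypt(c):
    """psi^{-1} o mu: map c to its coset representative and invert psi."""
    return 0 if is_qr(c, p) else 1

c0, c1 = encrypt(0), encrypt(1)
assert decrypt(c0) == 0 and decrypt(c1) == 1
# The group structure yields the XOR homomorphism: Dec(c0 * c1) = 0 XOR 1.
assert decrypt((c0 * c1) % N) == 1
```

A splitting oracle here would, on input `$c$`, return the representative `$1$` or `$x$` together with the square `$c$` or `$c \ast x^{-1}$`, which is exactly what decryption computes.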

## References

1. *A Note on Attribute-Based Group Homomorphic Encryption*. Michael Clear and Ciaran McGoldrick. Cryptology ePrint Archive, Report 2017/752. 2017.
2. *Homomorphic Encryption with Access Policies: Characterization and New Constructions*. Michael Clear, Arthur Hughes and Hitesh Tewari. AFRICACRYPT. 2013.
3. *Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products*. Jonathan Katz, Amit Sahai and Brent Waters. EUROCRYPT. 2008.
4. *Group Homomorphic Encryption: Characterizations, Impossibility Results, and Applications*. Frederik Armknecht, Stefan Katzenbeisser and Andreas Peter. Des. Codes Cryptogr. 2013.

# Equivalence Between Integer Comparison and Disjointness Testing of "Prefix" Sets

The idea in this post is due to Dov Gordon, who shared it with me in late 2016 at George Mason University in connection with some crypto work we were doing. Dov has allowed me to share it here. It might be something that is already out there; I'm not sure, but since I found it particularly interesting and quite relevant to the crypto work we were/are doing, I will describe it here. First I will give some formal definitions, then prove the theorem which is the crux of this post, and finally I will include some Python code that illustrates the idea.

**Definition 1.** The function `$\mathsf{bin}_m : \mathbb{N}_0 \to \{0, 1\}^{m}$` indexed by an integer `$m$` maps a nonnegative integer to a binary string of length `$m$` corresponding to its `$m$`-bit binary representation, and is defined as `$\mathsf{bin}_m(x) = b_{m - 1}b_{m - 2}\cdots b_0$` where `$b_i := \lfloor x / 2^i \rfloor \mod{2}$`.

**Definition 2.** The function `$\mathsf{int} : \{0, 1\}^+ \to \mathbb{N}_0$` maps a non-empty binary string to the nonnegative integer whose numerical value it represents, and is defined as `$\mathsf{int}(b_{k - 1} b_{k - 2} \cdots b_0) = \sum_{0 \leq i < k} b_i \cdot 2^{i}$`.

**Definition 3.** The *prefix set* associated with a nonnegative integer is the set containing all non-empty prefixes of its corresponding binary string. For a nonnegative `$n$`-bit integer `$x \in \mathbb{N}_0$`, the prefix set `$P_x$` is defined as `$$P_x := \{b_{n}\cdots b_{n - i} \in \{0, 1\}^{+} : 0 \leq i \leq n\}$$` where `$b_nb_{n - 1}\cdots b_0 = \mathsf{bin}_{n + 1}(x)$`. The set of prefix sets is denoted by `$\mathbb{P} := \{P_x : x \in \mathbb{N}_0, x < 2^n\}$`, and the function `$\mathsf{pre} : \mathbb{N}_0 \to \mathbb{P}$` maps nonnegative integers to their associated prefix sets; that is, `$\mathsf{pre}(x) = P_x$`, where `$P_x$` is defined as above for nonnegative integer `$x$`.

**Definition 4.** The function `$\mathsf{inc} : \{0, 1\}^+ \to \{0, 1\}^+$` maps a binary string representing an integer `$x$` to the binary string representing the integer `$x + 1$`, and is defined as `$\mathsf{inc}(s) = \mathsf{bin}_{|s|}(\mathsf{int}(s) + 1)$`, where the notation `$|\cdot|$` denotes the length of a binary string. By abuse of notation, we write `$\mathsf{inc}(P_x) = \{\mathsf{inc}(s) : s \in P_x\}$` to apply `$\mathsf{inc}$` to every prefix in a prefix set.

**Definition 5.** The binary relation `$\preceq$` is defined on the set of prefix sets `$\mathbb{P}$` as follows: `$$P_x \preceq P_y := P_x \cap \mathsf{inc}(P_y) = \emptyset.$$`

**Theorem 1.** *The map `$\mathsf{pre}$` is a relation-preserving isomorphism between `$(\{x \in \mathbb{N}_0 : x < 2^n\}, \leq)$` and `$(\mathbb{P}, \preceq)$`.*

**Proof.** We first prove that for nonnegative integers `$x, y < 2^n$`: `$x \leq y \implies P_x \preceq P_y$`. For the case `$x = y$`, both prefix sets are equal, so adding one to the prefixes in one of the sets results in a disjoint set of prefixes. Now for the case `$x < y$`: there exists an `$i$` such that bit `$i$` is the first bit, reading from left to right, where `$x$` and `$y$` differ, and it is one for `$y$` and zero for `$x$`. The set of prefixes with fewer than `$i$` bits is the same for `$P_x$` and `$P_y$`, so adding one to the prefixes in this subset of `$P_y$` results in a subset disjoint from that of `$P_x$`. Adding one to the set of prefixes in `$P_y$` with `$i$` or more bits results in a set disjoint from the set of prefixes in `$P_x$` with `$i$` or more bits. It remains to prove the other direction; that is, `$P_x \preceq P_y \implies x \leq y$`. If the prefix sets are disjoint after adding one to the prefixes in `$P_y$`, there are two possibilities. First, the original sets `$P_x$` and `$P_y$` are equal, and therefore `$x = y$`. Secondly, the first bit where `$x$` and `$y$` differ, reading from left to right, say at position `$i \geq 1$`, is one for `$y$` and zero for `$x$`; otherwise, adding one to the prefix with `$i$` bits whose bit `$i$` is zero would yield a prefix equal to one in `$P_x$`, so the sets would not be disjoint and we would not have `$P_x \preceq P_y$`. Therefore it follows that `$x < y$`, and we can conclude the proof.
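A small worked example of Definitions 3 to 5 may help (my own illustration, for `$n = 3$`, so attributes are 4-bit strings): take `$x = 5$` and `$y = 6$`.

```python
# Worked example for n = 3 (4-bit strings): x = 5, y = 6.
# P_5 and P_6 share the prefixes "0" and "01"; applying inc() to P_6
# shifts them away, so the sets become disjoint, witnessing 5 <= 6.
P5 = {"0", "01", "010", "0101"}   # prefixes of bin_4(5) = 0101
P6 = {"0", "01", "011", "0110"}   # prefixes of bin_4(6) = 0110

def inc(s):
    # Add one to the integer a binary string represents, widening if needed.
    v = int(s, 2) + 1
    w = len(s) if v < 2 ** len(s) else len(s) + 1
    return format(v, "0{}b".format(w))

inc_P6 = {inc(s) for s in P6}     # {"1", "10", "100", "0111"}
print(P5 & inc_P6)                # → set(): empty, so P_5 ⪯ P_6 holds
print(P6 & {inc(s) for s in P5})  # nonempty, so P_6 ⪯ P_5 fails, as 6 > 5
```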

Here is the Python code to generate the prefix sets and check the `$\preceq$` relation between them.

```python
def to_bin(v, nbits):
    bin_rep = [0] * nbits
    i = nbits - 1
    while v != 0 and i >= 0:
        bin_rep[i] = v % 2
        v //= 2
        i -= 1
    return bin_rep

def from_bin_str(bstr):
    bits = [int(b) for b in bstr]
    i = len(bits) - 1
    v = 0
    power = 1
    while i >= 0:
        v += bits[i] * power
        power *= 2
        i -= 1
    return v

def add_one_to_prefix(prefix):
    nbits = len(prefix)
    v = from_bin_str(prefix) + 1
    if v > 2**nbits - 1:
        nbits += 1
    return ''.join(map(str, to_bin(v, nbits)))

def gen_prefixes(v, nbits):
    bin_rep_s = [str(b) for b in to_bin(v, nbits)]
    prefixes = set()
    for i in range(nbits):
        prefix = ''.join(bin_rep_s[:i + 1])
        prefixes.add(prefix)
    return prefixes

def lte_prefix(a, b, nbits):
    a_prefixes = gen_prefixes(a, nbits)
    b_prefixes = gen_prefixes(b, nbits)
    new_b_prefixes = set()
    for prefix in b_prefixes:
        new_b_prefixes.add(add_one_to_prefix(prefix))

    return len(a_prefixes & new_b_prefixes) == 0
```

To support `$n$`-bit integers, set the variable `$\mathsf{nbits}$` to `$n + 1$` and call the function `$\mathsf{lte\_prefix}$`, which evaluates the comparison between two integers by means of the `$\preceq$` relation.
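As a sanity check on Theorem 1, the following self-contained snippet (a compact re-implementation of the same construction, independent of the code above) verifies the isomorphism exhaustively over all pairs of 4-bit integers:

```python
# Exhaustive check that pre is an isomorphism between (Z_{2^n}, <=)
# and (P, ⪯), re-implementing the prefix-set construction compactly.
def prefixes(v, nbits):
    s = format(v, '0{}b'.format(nbits))
    return {s[:i + 1] for i in range(nbits)}

def inc(p):
    v = int(p, 2) + 1
    w = len(p) if v < 2 ** len(p) else len(p) + 1
    return format(v, '0{}b'.format(w))

def lte(a, b, nbits):
    return prefixes(a, nbits).isdisjoint({inc(p) for p in prefixes(b, nbits)})

n = 4                       # compare 4-bit integers; prefix sets use n + 1 bits
for x in range(2 ** n):
    for y in range(2 ** n):
        assert lte(x, y, n + 1) == (x <= y)
print("isomorphism verified for all", 2 ** n, "x", 2 ** n, "pairs")
```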
# Open Problem (Theoretical Cryptography): Simultaneously Achieving Both Anonymity and Homomorphism in Additively-Homomorphic IBE

Before I delve into this open problem, I want to mention that I'm happy to have the new features I've recently added to this blog (see my previous two posts) available to me, such as LaTeX-style theorem environments and bibliography/references support, because I think I'm going to need them here, although I'm just going to start writing and see what happens.

Now it is worth pointing out that the open problem here is most likely not of much general interest, even in cryptography, since it is very narrowly focused and theoretical. It relates to a primitive known as identity-based group homomorphic encryption (IBGHE), which is defined in my PKC paper [1] as well as in some of my other papers, such as [2] and [3]. Basically, IBGHE is identity-based encryption that is homomorphic for some group operation, where the ciphertext space for every identity forms a group. Moreover, the decryption function is a group homomorphism between the ciphertext group and the plaintext group. The aforementioned papers describe some of the applications of group homomorphic encryption (GHE).

It is an open problem to construct an IBGHE that is simultaneously anonymous and homomorphic for addition. To the best of my knowledge, there are only two IBGHE schemes that support modular addition, namely my XOR-homomorphic variant of the Cocks IBE scheme in [3] and my IBGHE scheme from [1], which is homomorphic for addition modulo smooth square-free integers. Now Joye discovered that the Cocks IBE scheme itself is XOR-homomorphic [4], but that scheme is not an IBGHE since the ciphertext space with the homomorphic operation forms a quasigroup, not a group. Some readers might wonder about schemes that are considered multiplicatively homomorphic, which allow addition in the exponent, and question why we do not classify them as IBGHE schemes for addition. The reason is that the corresponding additive group has exponential order, and decryption, via Pollard's lambda algorithm, can only recover messages that are less than some polynomial bound, so the *valid* message space does not form an additive group. Now the two IBGHE schemes supporting modular addition that we are aware of are not anonymous (see [my post](https://scss.tcd.ie/~mclear/blog?post=anon) on breaking anonymity in the scheme from [1]). But there are variants of these schemes that achieve anonymity. However, although these variants gain anonymity, they lose the homomorphic property: usually, we need to know the identity associated with a ciphertext in order to correctly compute the homomorphic operation, so when the identity is hidden from us, as it is when the scheme is anonymous, we cannot compute the homomorphic operation. So the open problem, in a nutshell, is to construct an IBGHE for addition that is anonymous while retaining the homomorphic operation. This is mainly of theoretical interest, and I would be satisfied with any progress in this direction. Even when indistinguishability obfuscation (iO) is employed, we still do not know how to solve this problem.
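To illustrate the "addition in the exponent" point, here is a minimal sketch of my own (randomness and the full encryption structure stripped away to isolate the issue; the toy prime and generator are assumptions): hiding `$m$` as `$g^m$` makes addition homomorphic, but recovering `$m$` requires a discrete-log search, so only messages below a small bound are decryptable.

```python
# "Addition in the exponent": multiplying group elements adds the hidden
# messages, but decryption degenerates into a discrete-log search.
p = 467   # toy prime (illustration only); 2 has order p - 1 = 466 mod 467
g = 2

def enc_exp(m):
    return pow(g, m, p)

def dec_exp(c, bound=100):
    # Brute-force discrete log, standing in for Pollard's lambda: only
    # messages below `bound` lie in the "valid" message space.
    for m in range(bound):
        if pow(g, m, p) == c:
            return m
    return None  # message fell outside the recoverable range

assert dec_exp((enc_exp(17) * enc_exp(25)) % p) == 42  # homomorphic addition
assert dec_exp(enc_exp(300)) is None                   # 300 exceeds the bound
```

The recoverable messages `$\{0, \ldots, \mathsf{bound} - 1\}$` are not closed under addition, which is exactly why such schemes do not qualify as IBGHE for addition.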

I would be satisfied with any progress in this direction. Even when indistinguishability obfuscation (iO) is employed, we still do not know how to solve this problem. In fact, it is proved in my paper [2] that, assuming iO, public-key GHE for some group gives us IBGHE for the same group. However, the transformation does not give us anonymity. I have an outline of a paradigm for solving this problem that makes use of the iO result in [2]. The paradigm I have in mind is as follows. As part of the public parameters, we have an obfuscated program that maps an identity to a public key in some *multi-user* system. The public keys in a multi-user system share the same set of common public parameters; think of the generator `$g$` and modulus `$p$` in ElGamal as the common public parameters, except that ElGamal is of no use here since it is only multiplicatively homomorphic. Nevertheless, ElGamal serves to illustrate another property that this paradigm requires, namely key privacy, which can be viewed as the analog of anonymity in the identity-based setting: ciphertexts in the multi-user system do not reveal the public key they are associated with, as is the case in ElGamal. I am using the term multi-user system in a broad sense, permitting both the case where we have a trusted authority and the case where we do not. In the former, the public parameters are generated by a trusted authority holding a backdoor (master secret key) that allows it to decrypt any ciphertext. In our paradigm, the public parameters of the multi-user system are generated by the trusted authority of the IBE scheme and published as part of the IBE scheme's public parameters. So we need the multi-user system to be key-private and additively homomorphic, where the homomorphic operation can be computed without knowing the public key associated with a ciphertext. Then, additionally assuming iO, we get a solution to the problem, albeit under strong assumptions.

## References

1. *Additively Homomorphic IBE from Higher Residuosity*. Michael Clear and Ciaran McGoldrick. Public Key Cryptography. 2019.
2. *A Note on Attribute-Based Group Homomorphic Encryption*. Michael Clear and Ciaran McGoldrick. Cryptology ePrint Archive, Report 2017/752. 2017.
3. *Homomorphic Encryption with Access Policies: Characterization and New Constructions*. Michael Clear, Arthur Hughes and Hitesh Tewari. AFRICACRYPT. 2013.
4. *Identity-Based Cryptosystems and Quadratic Residuosity*. Marc Joye. Public Key Cryptography (1). 2016.

*Update:* The code for uberlight is now available [here](https://github.com/ciphron/hyperlight).

Uberlight is a fork of a minimalistic flat-file blog engine written in PHP called hyperlight, which you can find on GitHub. For my own blog, I wanted an even more minimalistic engine, so I forked hyperlight, stripped down some of its features, changed others and added more to suit my needs. Although the theme can be easily changed, my particular taste favors a simple style, as you can see here. But even after stripping down hyperlight to a certain extent, there were features I wanted to add. So in this post, I describe some of the new features and how to use them. First, though, I will briefly describe how posts are written in uberlight, which is preserved exactly from hyperlight.

# Math Expressions in TeX

We would like to write math expressions in TeX. I use the MathJax JavaScript library to render mathematical notation written in TeX for display in the browser. So to write a TeX math expression inline in the markdown file, we use backticks to enclose it in a code block, and within the backticks we write the TeX expression using the usual delimiter, namely a $ symbol. For example, consider the following sentence.

Let `$n \in \mathbb{Z}^+$` be a positive integer.

Now the markdown to produce the above is:

```markdown
Let `$n \in \mathbb{Z}^+$` be a positive integer.
```

However, there is still some work to do. The code blocks within the produced HTML mean that MathJax will ignore the TeX. Furthermore, the dollar sign delimiters need to be changed to `\(` and `\)`. Luckily, without having to reinvent the wheel, Yihui Xie solved this problem in his blog post with a piece of JavaScript that preprocesses the code tags, changes the delimiters for the TeX expressions and then removes the code tags. So I use his code in uberlight, and you can find it there. With this script added, we can comfortably use TeX expressions in our Markdown file, as we have done above.

# Updates

One feature I wanted to add was a means to extend existing blog posts with updates, which can be added over time and appear at the end of the post. The way I accomplish this in uberlight is with additional files in the posts directory with specific names. Suppose you have a post labeled 'example', so that there is a Markdown file in the posts directory with the name example.md. Now suppose we wish to add an update to that post. To do this, we add a file whose name is the label of the post (in this case, 'example') followed by a hash symbol and an arbitrary string (which might describe the particular update), and then naturally the '.md' extension. This file has the same form as a post and its content is written in Markdown. An update will then be added to the post. The updates are ordered by the modification date of the files, from the oldest to the most recent. To illustrate this, I have added an update to this post by writing a file called 'uberlight#1.md' to the posts directory, and you can see the result below at the end of the main post.
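To make the convention concrete, here is a small Python sketch of how such update files could be located and ordered (this is my own illustration; uberlight itself implements this logic in PHP):

```python
import glob
import os

def find_updates(posts_dir, label):
    # Update files are named '<label>#<string>.md'; they are ordered
    # by file modification date, oldest first.
    pattern = os.path.join(posts_dir, label + '#*.md')
    return sorted(glob.glob(pattern), key=os.path.getmtime)
```

Note that the main post file 'example.md' does not match the pattern, so only the update files are returned.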

# Extended Markdown

One of the motivations for developing an extension to Markdown was a desire for a way to automatically and easily add theorems and proofs styled in a similar manner to LaTeX. Such 'environments' are not supported by MathJax, which only deals with math expressions. Furthermore, I thought it would be nice for the theorems etc. to be numbered automatically, with an easy way to reference them throughout a post. For my extension to Markdown, read my post describing it here.

The code for the ExtMarkdown parser (written in PHP) is available now on GitHub as part of the uberlight blog engine. See the repository [here](https://github.com/ciphron/hyperlight).

# Extension to Markdown Supporting LaTeX-Style Theorem Environments, References and Citations

# Extended Markdown - New Commands

One of the motivations for developing an extension to Markdown was a desire for a way to automatically and easily add theorems and proofs styled in a similar manner to LaTeX. Such 'environments' are not supported by MathJax, which only deals with math expressions. Furthermore, I thought it would be nice for the theorems etc. to be numbered automatically, with an easy way to reference them throughout a post. To accommodate such new features, I chose a syntax for specifying commands in a Markdown file. A command is signalled by an exclamation mark (!) followed by the alphanumeric name of the command. If the command takes arguments, each argument is enclosed in braces and follows immediately after the name of the command (this is in fact modelled after LaTeX). To invoke the theorem command, take a look at the code below. The arguments of the theorem command are, first, the label of the theorem (so that it can easily be referenced) and, second, the content of the theorem. So we write:

```markdown
!theorem{th:primes}{There are an infinite number of primes.}
```
It is rendered as

**Theorem 1.** *There are an infinite number of primes.*

I call this extension to Markdown, rather lazily, ExtMarkdown, which is not very unique considering the many extensions to Markdown out there. We also have commands for definitions, lemmas and proofs. Then there is a reference (!ref) command whose only argument is the label of the theorem, lemma or definition to be referenced. Here is some elementary mathematical content that serves to illustrate the commands.

**Definition 1.** A semigroup consists of a set together with a binary operation satisfying the following properties.

1. The set is closed under the binary operation.
2. The binary operation is associative.
Now we have the following theorem.

**Theorem 2.** *The set `$S := \{x^2 + y^2 : x, y \in \mathbb{Z}^+\}$` together with the binary operation of integer multiplication is a semigroup.*

**Proof.** Associativity follows immediately from that of integer multiplication. However, we must show that the set is closed under multiplication. This can be seen as follows: `$$(x_1^2 + y_1^2)\cdot(x_2^2 + y_2^2) = x_1^2x_2^2 + x_1^2y_2^2 + x_2^2 y_1^2 + y_1^2y_2^2 = x_1^2x_2^2 + x_1^2y_2^2 + x_2^2y_1^2 + y_1^2y_2^2 + 2x_1x_2y_1y_2 - 2x_1x_2y_1y_2 = (x_1x_2 + y_1y_2)^2 + (x_1y_2 - x_2y_1)^2.$$` Therefore, `$(S, \ast)$` is a semigroup as it satisfies the properties of Definition 1.

Note that we used a '!ref' command in the proof of Theorem 2 to reference Definition 1 (and twice in this very sentence). See the raw ExtMarkdown by following the link at the end of the post.
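To make the command syntax concrete, here is a toy Python sketch of how `!name{arg1}{arg2}` invocations could be recognized (the actual ExtMarkdown parser is written in PHP as part of uberlight; this standalone sketch is my own illustration):

```python
import re

# Matches the '!name' part of a command; brace-delimited arguments
# are collected separately so that nested braces are handled.
COMMAND_RE = re.compile(r'!(?P<name>[A-Za-z0-9]+)')

def parse_commands(text):
    """Return a list of (name, args) for each !name{...}... command."""
    results = []
    for m in COMMAND_RE.finditer(text):
        pos = m.end()
        args = []
        # Collect consecutive brace-delimited arguments, tracking nesting depth.
        while pos < len(text) and text[pos] == '{':
            depth, start = 1, pos + 1
            pos += 1
            while pos < len(text) and depth > 0:
                if text[pos] == '{':
                    depth += 1
                elif text[pos] == '}':
                    depth -= 1
                pos += 1
            args.append(text[start:pos - 1])
        results.append((m.group('name'), args))
    return results
```

For example, parsing the theorem invocation shown earlier yields the command name 'theorem' with its label and content as the two arguments.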

The CSS I use for the theorem etc. environments is from a [post by Zach Harmony](http://drz.ac/2013/01/17/latex-theorem-like-environments-for-the-web/).

# Citations

Another useful command that I've added is '!cite' for citing papers etc. It takes one argument, namely a BibTeX key, which locates an entry in a BibTeX bibliography file that is specified in the uberlight configuration. A list of references is automatically generated and appended to the end of the post. Updates can cite articles that are cited in the main post, and if they cite articles not cited in the main post, a list of 'further' references is appended to the update. To illustrate citations, I will now arbitrarily cite [1].

Note that as of writing, I have not yet put the code on GitHub. I will do this very soon and add an update to this post to say that it has been made available.

## References
1. *Scargos: Towards Automatic Vulnerability Distribution*. Florian Rhinow and Michael Clear. SECRYPT. 2015.

# Overlapping Circles and the Flower of Life

*Update:* Since this post was written, I have modified the geom2d.Plane interface on GitHub, so there are some minor differences between the current interface and the code in this post that uses it. Also, a new implementation of geom2d.Plane for Tkinter has been added. See the repositories on GitHub.

My intention in this post is to discuss how to algorithmically draw some geometric patterns that I find strikingly beautiful, including the well-known Flower of Life. I will not comment on any significance such patterns may hold for certain people or how they have been used and interpreted. The focus here is on geometry and code. Once again, like my most recent posts, we will be using Python. First, though, I want to introduce a helpful building block that simplifies the process and allows us to focus more on the higher-level aspects.

## The geom2d module

I have written a simple module called 'geom2d' and put it [on GitHub](https://github.com/ciphron/geom2d). At the heart of geom2d is an abstraction called a Plane that allows you to draw basic geometric shapes to a 2D surface using floating-point coordinates. Here is the interface.

```python
class Plane(object):
    def __init__(self, width, height):
        self._width = width
        self._height = height

    @property
    def width(self):
        return self._width

    @property
    def height(self):
        return self._height

    def draw_point(self, point, color):
        pass

    def draw_line(self, start, end, color):
        pass

    def draw_circle(self, circle, color):
        pass

    # start and end are normalized angles on the unit interval
    # moving in an anticlockwise direction
    def draw_arc(self, center, radius, start, end, color):
        pass
```

The [GitHub repository](https://github.com/ciphron/geom2d) contains some useful implementations of Plane. For vector graphics, there is SVGPlane. For rasters, there is RasterPlane, which requires a Raster object (an implementation of Raster is provided: ImageRaster).

## Triangular Lattice Form of Overlapping Circles

Now we turn to the common triangular lattice form of overlapping circles. We have a sequence of patterns of concentric hexagonal rings of circles. Each pattern in the sequence has a complete hexagonal ring of circles and contains `$C(n)$` circles, where the sequence `$C(n)$` is 1, 1, 7, 19, 37, 61... (starting with `$n = 0$`); that is, in general, we have `$C(n) = 3n(n - 1) + 1$`. So how do we draw such patterns?
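As a quick sanity check, the closed form reproduces the sequence above; note also that each complete ring for `$n \geq 2$` adds `$6(n - 1)$` circles (this snippet is my own illustration, not part of the geom2d code):

```python
def num_circles(n):
    # C(n) = 3n(n - 1) + 1: the centered hexagonal numbers
    return 3 * n * (n - 1) + 1

print([num_circles(n) for n in range(6)])  # [1, 1, 7, 19, 37, 61]
```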

The construction is relatively straightforward. We begin with a center point `$(c_x, c_y)$` and radius `$r$` for the pattern. The circles then have radius `$r' = r / (n - 1)$`. Naturally, we have to handle the case `$n \in \{0, 1\}$` separately and return a single circle centered at the given center point with radius `$r$`. For `$n \geq 2$`, here is how we proceed. Note that we omit specifying the radius for the circles, since each circle we draw will implicitly have radius `$r'$`. We start off with two circles, the first with center `$(c_x, c_y)$` and the second with center `$(c_x, c_y - r')$`. So our list of circles contains both of these. Then we rely on a crucial function called perform_intersections. This function takes a list of circles, a center `$(c'_x, c'_y)$` and a bounding radius `$b$`. It finds the intersection points of every circle in the list with every other circle in the list and, if the distance between a point and `$(c'_x, c'_y)$` is less than or equal to `$b$`, it adds a circle to the list centered on that point. The process is repeated with the new circles until no more circles can be added. Here is the code, which admittedly needs some tidying up and optimization, but you will get a sketch of the algorithm.

```python
def perform_intersections(circles, center, radius):
    if len(circles) == 0:
        return

    circle_radius = circles[0].radius
    points = set(map(lambda c: c.center, circles))
    index = 1

    while index < len(circles):
        sel_circle = circles[index]
        for i in range(index):
            other_circle = circles[i]
            inter = sel_circle.intersection(other_circle)
            for p in inter:
                pr = Point(round(p.x, PRECISION_DIGITS),
                           round(p.y, PRECISION_DIGITS))
                if not is_near_identical(pr, points) \
                        and pr.distance_to(center) <= radius:
                    circle = Circle(pr, circle_radius)
                    circles.append(circle)
                    points.add(pr)
        index += 1
```
Now we have completed the workhorse of the algorithm to draw these patterns. Our function to produce the patterns, which we name overlapping_circles, locates the pattern in local space with center `$(0, 0)$` and sets the radius of the circles to `$1.0$` so that the pattern can be easily scaled and translated as required. We proceed by adding our two initial circles to the list, as described above, with radius `$1.0$`, and then we call perform_intersections with this list of circles, center point `$(0, 0)$` and bounding radius `$n - 1$` (plus some epsilon to take care of floating-point errors). Here is the code.

```python
def overlapping_circles(n):
    if n < 0:
        return []

    if n == 0 or n == 1:
        return [Circle(Point(0, 0), 1.0)]

    init_circles = []
    center = Point(0, 0)
    radius = 1.0
    circle1 = Circle(center, radius)
    init_circles.append(circle1)

    circle2 = Circle(Point(center.x, center.y - radius), radius)
    init_circles.append(circle2)

    circles = list(init_circles)
    perform_intersections(circles, center, (n - 1) + FP_THRESHOLD)
    return circles
```
Observe that by calling overlapping_circles with argument `$n$`, we obtain a list of circles `$L_n$` such that `$\mathsf{len}(L_n) = C(n)$`, and we get the desired result. We can then iterate through the list of circles and draw each one to the Plane using the method Plane.draw_circle. Here is an example of an image generated by the program, which shows overlapping circles for `$n = 5$`.

## The Flower of Life

We have a lot of the hard work done. The function to create the flower of life is very simple. The flower of life consists of an outer circle with radius 3.0 in local space and 19 "full circles" inside the outer circle. The remaining circles, which we term "surrounding circles", intersect with the full circles, and we have to draw arcs at their points of intersection inside the full circles. So, put simply, the flower of life pattern in our code is a pair consisting of, firstly, a list of full circles and, secondly, a list of surrounding circles. We leverage overlapping_circles to obtain these circles. The full circles are obtained by calling overlapping_circles with `$n = 3$`, and the surrounding circles are obtained by calling overlapping_circles with `$n = 5$` and then filtering out the full circles. We note that it is easy to generalize this code and extend the pattern to a greater number of circles, but here we just stick to the common flower of life pattern. The code is simple.

```python
def create_flower_of_life():
    full_circles = overlapping_circles(3)
    all_circles = overlapping_circles(5)
    surr_circles = list(set(all_circles).difference(set(full_circles)))
    return (full_circles, surr_circles)
```
The next challenge is to draw these circles to the plane. The tricky part is the arcs. The first point I want to make is to recall the prototype of Plane.draw_arc. This function is passed a center point and a radius, which describe a particular circle, along with parameters called start and end. The parameters start and end are normalized angles in the interval `$[0, 1]$`: 0 corresponds to zero radians, 1.0 corresponds to `$2\pi$` radians, and the function draws the arc in an anticlockwise direction. Therefore, if we have start=0 and end=0.25, we will get an arc around the upper-right quadrant of the circle. However, if we have start=0.25 and end=0, we will get an arc around the other three quadrants of the circle.
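As a small aside, converting an angle in radians to this normalized convention is a one-liner (this helper is my own illustration, not part of geom2d):

```python
from math import pi

def to_normalized(radians):
    # Map an angle in radians (measured anticlockwise) to the unit
    # interval [0, 1] used by Plane.draw_arc; negative angles wrap around.
    return (radians % (2 * pi)) / (2 * pi)
```

For instance, a quarter turn of `$\pi/2$` radians maps to 0.25, matching the upper-right quadrant example above.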

First though, we will look at the high-level code to draw the flower of life.

```python
def draw_flower_of_life(plane, flower_pattern, center, radius, color):
    full_circles, surr_circles = flower_pattern

    circle_radius = radius / 3.0

    outer_circle = Circle(center, radius)
    plane.draw_circle(outer_circle, color)

    def transform(circle):
        circle_center = Point(circle.center.x*circle_radius + center.x,
                              circle.center.y*circle_radius + center.y)
        return Circle(circle_center, circle_radius)

    # Materialize as lists so they can be iterated more than once
    full_circles = [transform(c) for c in full_circles]
    surr_circles = [transform(c) for c in surr_circles]

    for circle in full_circles:
        plane.draw_circle(circle, color)

    for circle in surr_circles:
        for full_circle in full_circles:
            draw_circle_filtered(plane, circle, full_circle, color)
```
Observe that for the surrounding circles, we iterate through the list of full circles and pass each one as an argument to a function called draw_circle_filtered. So the next step is to inspect this function and see what it does. In this function, the circle argument is the surrounding circle and the enclosing_circle argument is the full circle that we passed in the function above.

The draw_circle_filtered function looks for intersection points between the circle argument and the enclosing_circle argument. If it finds none, it simply returns. But if two intersection points are found, it proceeds to work out the normalized angles at which the intersection points lie on the circle. Then it has to figure out the order, i.e. which one is start and which one is end. There is a helper function called is_arc_outside_circle (which you can see in the repository on GitHub) that determines this order. Finally, we invoke Plane.draw_arc with the correct start and end values. Here is the code.

```python
def draw_circle_filtered(plane, circle, enclosing_circle, color):
    inter = enclosing_circle.intersection(circle)
    if len(inter) != 2:
        return
    else:
        rangles = []
        for p in inter:
            # Work out the angle from where the intersection point
            # is on the circle
            rad = acos((p.x - circle.center.x) / circle.radius)

            # Convert this angle in radians to a normalized angle in
            # the interval [0, 1]
            rangle = 0
            if p.y > circle.center.y:
                rangle = 0.5 + ((pi - rad) / (2*pi))
            else:
                rangle = rad / (2*pi)
            rangles.append(rangle)
        if is_arc_outside_circle(circle.center, circle.radius,
                                 rangles[0], rangles[1], enclosing_circle):
            rangles.reverse()
        plane.draw_arc(circle.center, circle.radius, rangles[0],
                       rangles[1], color)
```
Now I want to remark that there is room for plenty of optimization in various parts of the code, and at times simplicity is favored over efficiency. But we have now completed all the steps to algorithmically draw the triangular lattice form of overlapping circles in addition to the flower of life. Putting it all together, we have code that sets up two planes: a raster plane corresponding to our raster, namely ImageRaster (where we will produce a PNG), and an SVGPlane. Here is a code snippet.

```python
flower_pattern = create_flower_of_life()

planes = []

image_raster = ImageRaster(400, 300, geom2d.WHITE)
image_plane = RasterPlane(PLANE_WIDTH, PLANE_HEIGHT, image_raster)
planes.append(image_plane)

svg_plane = SVGPlane(PLANE_WIDTH, PLANE_HEIGHT, 'floweroflife.svg')
planes.append(svg_plane)

for plane in planes:
    draw_flower_of_life(plane, flower_pattern,
                        Point(PLANE_WIDTH / 2.0, PLANE_HEIGHT / 2.0),
                        FLOWER_RADIUS, geom2d.BLACK)

image_raster.save('floweroflife.png')
svg_plane.save()
```

You can find all the code in the GitHub repository. Finally, here is the SVG this code produces of the flower of life.

Here is the PNG for completeness.\n<\/p>","raw":"My intention in this post is to discuss how to alogorithmically draw some geometric patterns that I find strikingly beautiful including the well-known Flower of Life. I will not comment on any significance such patterns may hold for certain people or how they have been used and interpreted. The focus here is on geometry and code. Once again, like my most recent posts, we will be using Python. First though I want to introduce a helpful building block that simplifies the process and allows us to focus more on the higher-level aspects.\n\n## The geom2d module\nI have written a simple module called 'geom2d' and put it [on GitHub](https:\/\/github.com\/ciphron\/geom2d). At the heart of geom2d is an abstraction called a Plane that allows you to draw basic geometric shapes to a 2D surface using floating-point coordinates. Here is the interface.\n```python\nclass Plane(object):\n def __init__(self, width, height):\n self._width = width\n self._height = height\n\n @property\n def width(self):\n return self._width\n\n @property\n def height(self):\n return self._height\n\n def draw_point(self, point, color):\n pass\n\n def draw_line(self, start, end, color):\n pass\n\n def draw_circle(self, circle, color):\n pass\n\n # start and end are normalized angles on the unit interval\n # moving in an anticlockwise direction\n def draw_arc(self, center, radius, start, end, color):\n pass\n```\nThe [GitHub respository](https:\/\/github.com\/ciphron\/geom2d) contains some useful implementations of Plane. For vector graphics, there is SVGPlane. And for rasters, there is RasterPlane, which requires a Raster object (an implementation of Raster is provided: ImageRaster).\n\n## Tringular Lattice Form of Overlapping Circles\nNow we turn to the common triangular lattice form of overlapping circles. We have a sequence of patterns of concentric hexagonal rings of circles. 
Each pattern in the sequence has a complete hexagonal ring of circles and contains `$C(n)$` circles where the sequence `$C(n)$` is 1, 1, 7, 19, 37, 61... (starting with `$n = 0$`); that is, in general, we have `$C(n) = 3n(n - 1) + 1$`. So how do we draw such patterns?\n\nThe construction is relatively straightforward. We begin with a center point `$(c_x, c_y)$` and radius `$r$` for the pattern. The circles then have radius `$r' = r \/ (n - 1)$`. Naturally, we have to handle the case `$n \\in \\{0, 1\\}$` separately and return a single circle centered at the given center point with radius `$r$`. For `$n \\ge 2$`, here is how we proceed. Note that we omit specifying the radius for the circles since each circle we draw will implicitly have radius `$r'$`. We start off with two circles, the first with center `$(c_x, c_y)$` and the second with center `$(c_x, c_y - r')$`. So our list of circles contains both of these. Then we rely on a crucial function called perform_intersections. This function takes a list of circles, a center `$(c'_x, c'_y)$` and a bounding radius `$b$`. It finds the intersection points of every circle in the list with every other circle in the list and, if the distance between each point and `$(c'_x, c'_y)$` is less than or equal to `$b$`, it adds a circle to the list centered on that point. Then the process is repeated with the new circles until no more circles can be added. 
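The perform_intersections function leans on geom2d's Circle.intersection. I have not reproduced geom2d here, but the standard two-circle intersection construction it presumably implements looks like this (a tuple-based sketch of my own, not geom2d's actual code):

```python
from math import sqrt

def circle_intersection(c1, r1, c2, r2):
    # standard two-circle intersection; centers are (x, y) tuples.
    # Returns [] for separate, nested or coincident circles.
    (x1, y1), (x2, y2) = c1, c2
    d = sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)   # distance from c1 to the chord
    h = sqrt(max(r1 * r1 - a * a, 0.0))         # half the chord length
    mx = x1 + a * (x2 - x1) / d                 # chord midpoint
    my = y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]
```

For example, two unit circles centered at `$(0, 0)$` and `$(1, 0)$` intersect at `$x = 1\/2$`, `$y = \pm\sqrt{3}\/2$`.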
Here is the code, which admittedly needs some tidying up and optimization, but you will get a sketch of the algorithm.\n```python\ndef perform_intersections(circles, center, radius):\n if len(circles) == 0:\n return\n\n circle_radius = circles[0].radius\n points = set(map(lambda c: c.center, circles))\n index = 1\n \n while index < len(circles):\n sel_circle = circles[index]\n for i in range(index):\n other_circle = circles[i]\n inter = sel_circle.intersection(other_circle)\n for p in inter:\n pr = Point(round(p.x, PRECISION_DIGITS),\n round(p.y, PRECISION_DIGITS))\n if not is_near_identical(pr, points) \\\n and pr.distance_to(center) <= radius:\n circle = Circle(pr, circle_radius)\n circles.append(circle)\n points.add(pr)\n index += 1\n```\nNow we have completed the workhorse of the algorithm to draw these patterns. Our function to produce the patterns, which we name overlapping_circles, locates the pattern in local space with center `$(0, 0)$` and sets the radius of the circles to `$1.0$` so that it can be easily scaled and translated as required. We proceed by adding our two initial circles to the list as described above with radius `$1.0$` and then we call perform_intersections with this list of circles, center point `$(0, 0)$` and bounding radius `$n - 1$` (plus some epsilon to take care of floating point errors). Here is the code.\n```python\ndef overlapping_circles(n):\n if n < 0:\n return []\n\n if n == 0 or n == 1:\n return [Circle(Point(0, 0), 1.0)]\n\n init_circles = []\n center = Point(0, 0)\n radius = 1.0\n circle1 = Circle(center, radius)\n init_circles.append(circle1)\n\n circle2 = Circle(Point(center.x, center.y - radius), radius)\n init_circles.append(circle2)\n\n circles = list(init_circles)\n perform_intersections(circles, center, (n - 1) + FP_THRESHOLD)\n return circles\n```\nObserve that, calling overlapping_circles with argument `$n$`, we obtain a list of circles `$L_n$` such that `$\\mathsf{len}(L_n) = C(n)$`. And we get the desired result. 
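As a quick sanity check on this count, the formula `$C(n) = 3n(n - 1) + 1$` can be tabulated directly and compared against the sequence given earlier:

```python
def circle_count(n):
    # number of circles in the pattern with n - 1 hexagonal rings:
    # C(n) = 3n(n - 1) + 1
    return 3 * n * (n - 1) + 1

print([circle_count(n) for n in range(6)])  # [1, 1, 7, 19, 37, 61]
```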
We can then iterate through the list of circles and draw each one to the Plane using the method Plane.draw_circle. Here is an example of an image generated by the program which shows overlapping circles for `$n = 5$`.\n![Overlapping circles for n = 5](https:\/\/scss.tcd.ie\/~mclear\/blog\/images\/ocircles.png \"Overlapping circles for n = 5\")\n\n\n## The Flower of Life\nWe have a lot of the hard work done. The function to create the flower of life is very simple. The flower of life consists of an outer circle with radius 3.0 in local space and 19 \"full circles\" inside the outer circle. Then the remaining circles we term \"surrounding circles\"; they intersect with the full circles and we have to draw arcs at the points of intersection inside the full circles. So, put simply, the flower of life pattern in our code is a pair consisting of, firstly, a list of full circles and, secondly, a list of surrounding circles. We leverage overlapping_circles to obtain these circles. The full circles are obtained by calling overlapping_circles with `$n = 3$` and the surrounding circles are obtained by calling overlapping_circles with `$n = 5$` and then filtering out the full circles. We note that it is easy to generalize this code and extend the pattern to a greater number of circles, but here we just stick to the common flower of life pattern. The code is simple.\n```python\ndef create_flower_of_life():\n full_circles = overlapping_circles(3)\n all_circles = overlapping_circles(5)\n surr_circles = list(set(all_circles).difference(set(full_circles)))\n return (full_circles, surr_circles)\n```\nThe next challenge is to draw these circles to the plane. The tricky part is the arcs. The first point I want to make is to recall the prototype of Plane.draw_arc. This function is passed a center point and a radius, which describe a particular circle, along with parameters called start and end. The parameters start and end are normalized angles in the interval `$[0, 1]$`. 
So 0 corresponds to zero radians and 1.0 corresponds to `$2\\pi$` radians. And the function draws the arc in an anti-clockwise direction. Therefore, if we have start=0 and end=0.25, we will get an arc around the upper-right quadrant of the circle. However, if we have start=0.25 and end=0, we will get an arc around the other three quadrants of the circle.\n\nFirst, though, we will look at the high-level code to draw the flower of life.\n```python\ndef draw_flower_of_life(plane, flower_pattern, center, radius, color):\n full_circles, surr_circles = flower_pattern\n\n circle_radius = radius \/ 3.0\n\n outer_circle = Circle(center, radius)\n plane.draw_circle(outer_circle, color)\n\n def transform(circle):\n circle_center = Point(circle.center.x*circle_radius + center.x,\n circle.center.y*circle_radius + center.y)\n return Circle(circle_center, circle_radius)\n\n full_circles = map(transform, full_circles)\n surr_circles = map(transform, surr_circles)\n\n for circle in full_circles:\n plane.draw_circle(circle, color)\n\n for circle in surr_circles:\n for full_circle in full_circles:\n draw_circle_filtered(plane, circle, full_circle, color)\n```\nObserve that for the surrounding circles, we iterate through the list of full circles and pass each one as an argument to a function called draw_circle_filtered. So the next step is to inspect this function and see what it does. In this function, the circle argument is the surrounding circle and the enclosing_circle argument is the full circle that we have passed in the function above.\n\nThe draw_circle_filtered function looks for intersection points between the circle argument and the enclosing_circle argument. If it finds none, it simply returns. But if two intersection points are found, it proceeds to work out the normalized angles from where the intersection points are on the circle. Then it has to figure out the order, i.e. which one is start and which one is end. 
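As an aside, the normalized-angle computation just mentioned (acos plus a branch on the sign of the y-difference, as in draw_circle_filtered below) can be written more compactly with atan2. Here is an equivalent sketch of my own using plain tuples rather than geom2d's Point; the flipped y-difference accounts for the plane's screen-style y-down coordinates:

```python
from math import atan2, pi

def normalized_angle(center, p):
    # normalized angle in [0, 1) of point p on a circle centered at `center`,
    # anticlockwise from the positive x-axis; y grows downwards on screen,
    # so the y-difference is negated
    rad = atan2(center[1] - p[1], p[0] - center[0])
    return (rad % (2 * pi)) / (2 * pi)

# the point directly above the center (smaller y on screen) is a quarter turn
print(normalized_angle((0.0, 0.0), (0.0, -1.0)))  # 0.25
```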
There is a helper function called is_arc_outside_circle (which you can see in the repository on GitHub) that determines this order. Finally, we invoke Plane.draw_arc with the correct start and end values. Here is the code.\n```python\ndef draw_circle_filtered(plane, circle, enclosing_circle, color):\n inter = enclosing_circle.intersection(circle)\n if len(inter) != 2:\n return\n else:\n rangles = []\n for p in inter:\n # Work out the angle from where the intersection point\n # is on the circle\n rad = acos((p.x - circle.center.x) \/ circle.radius)\n\n # Convert this angle in radians to a normalized angle in\n # the interval [0, 1]\n rangle = 0\n if p.y > circle.center.y:\n rangle = 0.5 + ((pi - rad) \/ (2*pi))\n else:\n rangle = rad \/ (2*pi)\n rangles.append(rangle)\n if is_arc_outside_circle(circle.center, circle.radius,\n rangles[0], rangles[1], enclosing_circle):\n rangles.reverse()\n plane.draw_arc(circle.center, circle.radius, rangles[0],\n rangles[1], color)\n```\nNow I want to remark that there is room for plenty of optimization in various parts of the code and at times simplicity is favored over efficiency. But now, we have completed all the steps to algorithmically draw the triangular lattice form of overlapping circles in addition to the flower of life. Putting it all together, we have code that sets up two planes: a raster plane corresponding to our raster, namely ImageRaster (where we will produce a PNG), and an SVGPlane. 
Here is a code snippet.\n```python\nflower_pattern = create_flower_of_life()\n\nplanes = []\n\nimage_raster = ImageRaster(400, 300, geom2d.WHITE)\nimage_plane = RasterPlane(PLANE_WIDTH, PLANE_HEIGHT, image_raster)\nplanes.append(image_plane)\n\nsvg_plane = SVGPlane(PLANE_WIDTH, PLANE_HEIGHT, 'floweroflife.svg')\nplanes.append(svg_plane)\n\nfor plane in planes:\n draw_flower_of_life(plane, flower_pattern,\n Point(PLANE_WIDTH \/ 2.0, PLANE_HEIGHT \/ 2.0),\n FLOWER_RADIUS, geom2d.BLACK)\n\n\nimage_raster.save('floweroflife.png')\nsvg_plane.save()\n```\nYou can find all the code in the [GitHub repository](https:\/\/github.com\/ciphron\/floweroflife). Finally, here is the SVG this code produces of the flower of life.\n![Flower of Life](https:\/\/scss.tcd.ie\/~mclear\/blog\/images\/floweroflife.svg \"Flower of Life\")\n\nHere is the PNG for completeness.\n![Flower of Life](https:\/\/scss.tcd.ie\/~mclear\/blog\/images\/floweroflife.png \"Flower of Life\")\n","timestamp":1581950734,"edited":1581950734},{"updates":[],"title":"Breaking Anonymity in IBE from Higher Residuosity","slug":"anon","summary":"anon","image":"","tags":["anon"],"content":"

In this post I want to address anonymity of an identity-based encryption scheme in my paper from PKC 2019 available here<\/a>. Basically, the assumption presented in the paper to achieve anonymity for my scheme has been recently broken by Zhao et al. in this paper<\/a> by describing a generalized Galbraith test for an IBE scheme, such as that described in my paper (extended from Boneh, Lavigne and Sabin<\/a>), with `$e > 2$<\/code>. So the new cryptographic assumption made in my paper, namely the Special Polynomial Equation Solvability (SPES) assumption, turns out not to be true. I was taking the wrong approach in the brief cryptanalysis section in the paper. In this post, I will describe the new Galbraith test of Zhao et al. and how it breaks the assumption, along with code for it in Sage. Before describing SPES, I will recall a definition from my paper, but familiarity with the notation and terminology from the paper is advised. <\/p>\n`

**Definition: Algebraic Equation Set<\/strong> <\/p>\n**

**\n**The algebraic equation set for a ciphertext `$c(x) \\in \\hat{C}$<\/code> with respect to ``$a = H(\\mathsf{id})$<\/code> is derived as follows. The unknowns are the coefficients ``$z_0, ..., z_{e - 1}$<\/code> of the polynomial ``$f(x)$<\/code> generated during encryption. Raising ``$f(x)$<\/code> to the power of ``$e$<\/code> and reducing according to the equivalence relation ``$x^e \\equiv a$<\/code> induced by the quotient of the ring ``$\\mathbb{Z}_N^\\ast[x]\/(x^e - a)$<\/code> yields a set of ``$e$<\/code> multivariate polynomials in ``$z_0, ..., z_{e - 1}$<\/code> of degree ``$e$<\/code>, one for each coefficient of the result. The algebraic equation set is formed by letting the polynomial for the ``$i$<\/code>-th coefficient of the result equal to ``$c_i$<\/code> for ``$i \\in \\{0, \\ldots, e - 1\\}$<\/code>. For example, the algebraic equation set for ``$e = 3$<\/code> is\n``$$\\begin{align} z_0^3 + a z_1^3 + a^2 z_2^3 + 6a z_0 z_1 z_2 = c_0 \\\\\\\\ 3z_0^2 z_1 + 3a z_0 z_2^2 + 3a z_1^2 z_2 = c_1 \\\\\\\\ 3z_0^2 z_2 + 3z_0 z_1^2 + 3a z_1 z_2^2 = c_2 \\end{align}$$<\/code><\/p>\n<\/blockquote>\n`

`So basically, SPES is the assumption that checking solvability of the above set of multivariate polynomial equations is hard. This turns out to be an invalid assumption because there is a better way to look at the problem that yields an efficient generalized Galbraith test that breaks anonymity in the Boneh, Lavigne and Sabin (BLS) scheme and related schemes.<\/p>\n`
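The `$e = 3$` equation set above is easy to verify numerically: cube a random `$f(x)$` in `$\mathbb{Z}_q[x]\/(x^3 - a)$` and compare the coefficients against the three claimed polynomials. A quick plain-Python sketch (poly_mul_mod is my own toy helper, not from the paper):

```python
import random

def poly_mul_mod(f, g, e, a, q):
    # schoolbook product in Z_q[x], then reduce via x^e = a
    res = [0] * (2 * e - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            res[i + j] = (res[i + j] + fi * gj) % q
    for k in range(2 * e - 2, e - 1, -1):
        res[k - e] = (res[k - e] + a * res[k]) % q  # fold x^(e+t) into a*x^t
        res[k] = 0
    return res[:e]

random.seed(1)
q = 1009  # toy prime modulus
a = 5
z0, z1, z2 = (random.randrange(1, q) for _ in range(3))
f = [z0, z1, z2]
cube = poly_mul_mod(poly_mul_mod(f, f, 3, a, q), f, 3, a, q)
expected = [(z0**3 + a*z1**3 + a*a*z2**3 + 6*a*z0*z1*z2) % q,
            (3*z0*z0*z1 + 3*a*z0*z2*z2 + 3*a*z1*z1*z2) % q,
            (3*z0*z0*z2 + 3*z0*z1*z1 + 3*a*z1*z2*z2) % q]
print(cube == expected)  # True
```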

`Galbraith Test of Zhao et al. for small ``$e \\ge 2$<\/code><\/h2>\n`I suggest reading Zhao et al. first; I will briefly recall the ideas here. Now let `$a = H(\\mathsf{id})$<\/code> and consider a ciphertext polynomial ``$tf(x)^e \\in \\mathbb{Z}_N[x]\/(x^e - a)$<\/code>, which is in simplified (albeit less efficient) BLS form as considered in my paper, where ``$t \\xleftarrow{\\$} \\mathbb{Z}_N$<\/code> and ``$f(x)$<\/code> is a random polynomial of degree ``$e - 1$<\/code> over ``$\\mathbb{Z}_N$<\/code>. Now consider the power residue symbol over a function field ``$\\mathbb{F}_p[x]$<\/code>, which we denote as ``$\\left(\\frac{\\cdot}{\\cdot}\\right)_{e, \\mathbb{F}_p}$<\/code>. The derivation in equation (1) in Section 5.2 of Zhao et al. shows that\n``\\[\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p} = 1 \\mod{\\mathfrak{p}}\\]<\/code> where ``$(\\mathfrak{p})$<\/code> is the ideal defined in Section 3.1 of my PKC paper. The same identity holds over ``$\\mathbb{F}_q[x]$<\/code> and with the ideal ``$(\\mathfrak{q})$<\/code>. If we compute the above with a random ciphertext, we will get a random residue symbol. The next question that arises is how to compute ``$\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p}$<\/code>. The trick is to repeatedly apply the general reciprocity law to ``$\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p}$<\/code> until we get a polynomial of degree ``$1$<\/code> i.e. we have\n``\\[\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p} \\equiv \\left(\\frac{c_p}{\\mathfrak{p}}\\right)_e \\left(\\frac{\\alpha_p}{\\mathfrak{\\Phi(x)}}\\right)_e \\mod{\\mathfrak{p}}\\]<\/code> where ``$\\alpha_p, c_p \\in \\mathbb{F}_p$<\/code> and ``$\\Phi(x) \\in \\mathbb{F}_p[x]$<\/code> with ``$\\mathsf{deg}(\\Phi(x)) = 1$<\/code>. The analogous congruence holds for ``$(\\mathfrak{q})$<\/code>. 
Of course we will be working over ``$\\mathbb{Z}_N[x]$<\/code> so when we apply the reciprocity law, we will get ``$\\gamma_N \\equiv \\alpha_p \\mod{p}$<\/code> (think of the analogous for ``$q$<\/code>) and ``$c_N \\equiv c_p \\mod{p}$<\/code>. So in short, the Galbraith test ``$G_e$<\/code> for small ``$e \\ge 2$<\/code> is defined as follows:\n``\\[G_e(a, c(x)) = \\left(\\frac{c_N\\gamma_N}{\\mathfrak{N}}\\right)_e.\\]<\/code> This is the familiar power residue symbol we compute in our paper (i.e. with the ideal ``$(\\mathfrak{N})$<\/code>). For random ciphertexts ``$c(x)$<\/code>, the value ``$G_e(a, c(x))$<\/code> is conjectured by Zhao et al. to be uniform on the set of residue symbols, whereas for a ciphertext that is generated honestly with identity ``$\\mathsf{id}$<\/code> (recall that ``$a = H(\\mathsf{id})$<\/code>), with all but negligible probability we have that ``$G_e(a, c(x)) = 1$<\/code>. Therefore, we have an efficient test that breaks anonymity in BLS and related schemes. So now, I will give the code in Sage that implements the Galbraith test and allows you to run it on honest ciphertexts and random ciphertexts and see the results. The variable name conventions follow Zhao et al. You must first call setup_ideal with parameters ``$N$<\/code>, ``$e$<\/code> and ``$\\mu$<\/code> where ``$\\mu$<\/code> is a generator in ``$\\mathbb{Z}_N$<\/code>. Because Sage does not implement an efficient algorithm for the residue symbol with composite (in our case, semiprime) ideals, you can only experiment for the moment with toy parameters i.e. small ``$N$<\/code>. The class BLSAnonTester can be instantiated with ``$R_{\\mathsf{id}} = H(\\mathsf{id})$<\/code>, prime ``$e$<\/code> and the ideal generated by setup_ideal. Then you can create honest ciphertexts or random ciphertexts and run the method gt to evaluate the Galbraith test. Now the code:<\/p>\n`

`\ndef setup_ideal(N, e, mu):\n F.<z> = NumberField(cyclotomic_polynomial(e))\n a1 = N*F.ring_of_integers() + (z - mu)*F.ring_of_integers()\n return a1\n\ndef find_params(a, b):\n c = 1\n while a.degree() > 0:\n next_a = b % a\n next_b = a\n\n c *= (-1)^(a.degree() * b.degree()) * a.lc()^b.degree() * b.lc()^(-a.degree())\n a = next_a\n b = next_b\n gamma = a\n return (c, gamma)\n\ndef rand_inv_elem(ZN):\n r = ZN.random_element()\n while not r.is_unit():\n r = ZN.random_element()\n return r\n\nclass BLSAnonTester(object):\n def __init__(self, Rid, e, a1):\n self.Rid = Rid\n self.e = e\n self.a1 = a1\n self.ZN = Rid.parent()\n R.<x> = self.ZN['x']\n self.R = R\n self.S = R.quotient(x^e - Rid)\n\n def gen_ct(self, m):\n f = self.S([rand_inv_elem(self.ZN) for i in range(self.e)])\n g = f^self.e\n t = rand_inv_elem(self.ZN)\n while self.J(t) != m:\n t = rand_inv_elem(self.ZN)\n return t*g\n\n def rand_ct(self):\n return self.S([rand_inv_elem(self.ZN) for i in range(self.e)])\n\n def J(self, t):\n z = self.a1.ring().gen()\n smap = {z^i : i for i in range(self.e)}\n return smap[self.a1.ring()(t).residue_symbol(self.a1, self.e)]\n\n def gt(self, c):\n c = self.R(c.list())\n modulus = self.R.gen()^self.e - self.Rid\n try:\n cn, gamma = find_params(c, modulus)\n return self.J(cn * gamma)\n except Exception as ee:\n return None\n<\/code><\/pre>\n`So that's it. We can break anonymity in BLS and related schemes. Thanks for reading and Happy Christmas.<\/p>","raw":"In this post I want to address anonymity of an identity-based encryption scheme in my paper from PKC 2019 available [here](https:\/\/eprint.iacr.org\/2019\/062.pdf). Basically, the assumption presented in the paper to achieve anonymity for my scheme has been recently broken by Zhao et al. 
in [this paper](https:\/\/eprint.iacr.org\/2019\/557.pdf) by describing a generalized Galbraith test for an IBE scheme, such as that described in my paper (extended from [Boneh, Lavigne and Sabin](https:\/\/ptolemy.berkeley.edu\/projects\/truststc\/education\/reu\/13\/Papers\/SabinM_Paper.pdf)), with `$e > 2$`. So the new cryptographic assumption made in my paper, namely the Special Polynomial Equation Solvability (SPES) assumption, turns out not to be true. I was taking the wrong approach in the brief cryptanalysis section in the paper. In this post, I will describe the new Galbraith test of Zhao et al. and how it breaks the assumption, along with code for it in Sage. Before describing SPES, I will recall a definition from my paper, but familiarity with the notation and terminology from the paper is advised. \n\n**Definition: Algebraic Equation Set** \n> The algebraic equation set for a ciphertext `$c(x) \\in \\hat{C}$` with respect to `$a = H(\\mathsf{id})$` is derived as follows. The unknowns are the coefficients `$z_0, ..., z_{e - 1}$` of the polynomial `$f(x)$` generated during encryption. Raising `$f(x)$` to the power of `$e$` and reducing according to the equivalence relation `$x^e \\equiv a$` induced by the quotient of the ring `$\\mathbb{Z}_N^\\ast[x]\/(x^e - a)$` yields a set of `$e$` multivariate polynomials in `$z_0, ..., z_{e - 1}$` of degree `$e$`, one for each coefficient of the result. The algebraic equation set is formed by letting the polynomial for the `$i$`-th coefficient of the result equal to `$c_i$` for `$i \\in \\{0, \\ldots, e - 1\\}$`. For example, the algebraic equation set for `$e = 3$` is\n` $$\\begin{align}\n z_0^3 + a z_1^3 + a^2 z_2^3 + 6a z_0 z_1 z_2 = c_0 \\\\\\\\\n 3z_0^2 z_1 + 3a z_0 z_2^2 + 3a z_1^2 z_2 = c_1 \\\\\\\\\n 3z_0^2 z_2 + 3z_0 z_1^2 + 3a z_1 z_2^2 = c_2\n \\end{align}$$`\n\nSo basically, SPES is the assumption that checking solvability of the above set of multivariate polynomial equations is hard. 
This turns out to be an invalid assumption because there is a better way to look at the problem that yields an efficient generalized Galbraith test that breaks anonymity in the Boneh, Lavigne and Sabin (BLS) scheme and related schemes.\n\n## Galbraith Test of Zhao et al. for small `$e \\ge 2$`\nI suggest reading Zhao et al. first; I will briefly recall the ideas here. Now let `$a = H(\\mathsf{id})$` and consider a ciphertext polynomial `$tf(x)^e \\in \\mathbb{Z}_N[x]\/(x^e - a)$`, which is in simplified (albeit less efficient) BLS form as considered in my paper, where `$t \\xleftarrow{\\$} \\mathbb{Z}_N$` and `$f(x)$` is a random polynomial of degree `$e - 1$` over `$\\mathbb{Z}_N$`. Now consider the power residue symbol over a function field `$\\mathbb{F}_p[x]$`, which we denote as `$\\left(\\frac{\\cdot}{\\cdot}\\right)_{e, \\mathbb{F}_p}$`. The derivation in equation (1) in Section 5.2 of Zhao et al. shows that\n`\\[\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p} = 1 \\mod{\\mathfrak{p}}\\]` where `$(\\mathfrak{p})$` is the ideal defined in Section 3.1 of my PKC paper. The same identity holds over `$\\mathbb{F}_q[x]$` and with the ideal `$(\\mathfrak{q})$`. If we compute the above with a random ciphertext, we will get a random residue symbol. The next question that arises is how to compute `$\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p}$`. The trick is to repeatedly apply the general reciprocity law to `$\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p}$` until we get a polynomial of degree `$1$` i.e. we have\n`\\[\\left(\\frac{tf(x)^e}{x^e - a}\\right)_{e, \\mathbb{F}_p} \\equiv \\left(\\frac{c_p}{\\mathfrak{p}}\\right)_e \\left(\\frac{\\alpha_p}{\\mathfrak{\\Phi(x)}}\\right)_e \\mod{\\mathfrak{p}}\\]` where `$\\alpha_p, c_p \\in \\mathbb{F}_p$` and `$\\Phi(x) \\in \\mathbb{F}_p[x]$` with `$\\mathsf{deg}(\\Phi(x)) = 1$`. The analogous congruence holds for `$(\\mathfrak{q})$`. 
Of course we will be working over `$\\mathbb{Z}_N[x]$` so when we apply the reciprocity law, we will get `$\\gamma_N \\equiv \\alpha_p \\mod{p}$` (think of the analogous for `$q$`) and `$c_N \\equiv c_p \\mod{p}$`. So in short, the Galbraith test `$G_e$` for small `$e \\ge 2$` is defined as follows:\n`\\[G_e(a, c(x)) = \\left(\\frac{c_N\\gamma_N}{\\mathfrak{N}}\\right)_e.\\]` This is the familiar power residue symbol we compute in our paper (i.e. with the ideal `$(\\mathfrak{N})$`). For random ciphertexts `$c(x)$`, the value `$G_e(a, c(x))$` is conjectured by Zhao et al. to be uniform on the set of residue symbols, whereas for a ciphertext that is generated honestly with identity `$\\mathsf{id}$` (recall that `$a = H(\\mathsf{id})$`), with all but negligible probability we have that `$G_e(a, c(x)) = 1$`. Therefore, we have an efficient test that breaks anonymity in BLS and related schemes. So now, I will give the code in Sage that implements the Galbraith test and allows you to run it on honest ciphertexts and random ciphertexts and see the results. The variable name conventions follow Zhao et al. You must first call setup_ideal with parameters `$N$`, `$e$` and `$\\mu$` where `$\\mu$` is a generator in `$\\mathbb{Z}_N$`. Because Sage does not implement an efficient algorithm for the residue symbol with composite (in our case, semiprime) ideals, you can only experiment for the moment with toy parameters i.e. small `$N$`. The class BLSAnonTester can be instantiated with `$R_{\\mathsf{id}} = H(\\mathsf{id})$`, prime `$e$` and the ideal generated by setup_ideal. Then you can create honest ciphertexts or random ciphertexts and run the method gt to evaluate the Galbraith test. Now the code:\n```python\ndef setup_ideal(N, e, mu):\n F.<z> = NumberField(cyclotomic_polynomial(e))\n a1 = N*F.ring_of_integers() + (z - mu)*F.ring_of_integers()\n return a1\n\ndef find_params(a, b):\n c = 1\n while a.degree() > 0:\n next_a = b % a\n next_b = a\n \n c *= (-1)^(a.degree() * b.degree()) * a.lc()^b.degree() * b.lc()^(-a.degree())\n a = next_a\n b = next_b\n gamma = a\n return (c, gamma)\n\ndef rand_inv_elem(ZN):\n r = ZN.random_element()\n while not r.is_unit():\n r = ZN.random_element()\n return r\n\nclass BLSAnonTester(object):\n def __init__(self, Rid, e, a1):\n self.Rid = Rid\n self.e = e\n self.a1 = a1\n self.ZN = Rid.parent()\n R.<x> = self.ZN['x']\n self.R = R\n self.S = R.quotient(x^e - Rid)\n \n def gen_ct(self, m):\n f = self.S([rand_inv_elem(self.ZN) for i in range(self.e)])\n g = f^self.e\n t = rand_inv_elem(self.ZN)\n while self.J(t) != m:\n t = rand_inv_elem(self.ZN)\n return t*g\n \n def rand_ct(self):\n return self.S([rand_inv_elem(self.ZN) for i in range(self.e)])\n \n def J(self, t):\n z = self.a1.ring().gen()\n smap = {z^i : i for i in range(self.e)}\n return smap[self.a1.ring()(t).residue_symbol(self.a1, self.e)]\n \n def gt(self, c):\n c = self.R(c.list())\n modulus = self.R.gen()^self.e - self.Rid\n try:\n cn, gamma = find_params(c, modulus)\n return self.J(cn * gamma)\n except Exception as ee:\n return None\n```\nSo that's it. We can break anonymity in BLS and related schemes. Thanks for reading and Happy Christmas.","timestamp":1577260285,"edited":1577260285},{"updates":[],"title":"Low-depth Circuits for Computing the Hamming Weight","slug":"hamming","summary":"hamming","image":"","tags":["hamming"],"content":"

Suppose we have a binary string with `$n$<\/code> bits. In this post, I will look at constructing (Boolean) arithmetic circuits that compute the hamming weight of a bitstring such that the circuits have minimal multiplicative depth. More precisely, we will develop algorithms handling arbitrary ``$n$<\/code> that generate circuits to compute the hamming weight of a bitstring with ``$n$<\/code> bits. Our circuits will consist of XOR and AND gates i.e. the Boolean case of arithmetic circuits. I have updated `

`SCDF<\/a> recently to include the algorithms described here and the code can be found on the SCDF github repository<\/a>. So the code here is in Python, leveraging the SCDF framework.<\/p>\n`

`First Approach<\/h2>\n`Our first approach is probably the natural top-down construction. We use a Wallace-like tree structure that consists of adders. For simplicity, in this first approach, we will only handle an input bitstring whose length is a power of 2. Here is the code.<\/p>\n

`from base import *\nimport arith\nimport utils\n\ndef hamming_weight_add(v):\n n = len(v)\n if not utils.is_power_of_2(n):\n raise ValueError('Length of vector is not a power of 2')\n elems = map(lambda x: [x], v)\n while n >= 2:\n n \/= 2\n for i in range(n):\n elems[i] = arith.ripple_add(elems[2*i], elems[2*i + 1])\n w = elems[0]\n\n return w<\/code><\/pre>\n`As usual with SCDF, we import from the base module, but here we also rely on the arith module and utils module which you can find in the SCDF repo. Now before computing the depth, we reduce the depth of the circuit by running scdf.reduce_depth. We will consider the depth of the circuit computing the most significant bit of the hamming weight. Invariably, we see that this is `$2\\log{n} - 1$<\/code> where ``$n$<\/code> is the number of input bits. However, we can do better than that, and this brings us to our next algorithm.<\/p>\n`
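The arith.ripple_add call above is, as the name suggests, a ripple-carry adder built from XOR and AND gates. SCDF's actual implementation operates on circuit nodes, but a plain-Python stand-in over little-endian bit lists conveys the structure (my own sketch, not SCDF code):

```python
def ripple_add(a, b):
    # ripple-carry addition of little-endian bit lists using only
    # XOR (^) and AND (&), the gate set of our Boolean circuits
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    out, carry = [], 0
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)
        # carry-out is the majority of x, y, carry, written with AND/XOR only
        carry = (x & y) ^ (carry & (x ^ y))
    out.append(carry)
    return out

print(ripple_add([1, 1], [1, 0, 1]))  # 3 + 5 = 8, i.e. [0, 0, 0, 1]
```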

`Second Approach<\/h2>\n`This next approach is based on the idea that the `$i$<\/code>-th bit of the hamming weight can be computed as the XOR of all AND combinations of ``$2^i$<\/code> input bits. To see this, suppose we have 3 bits; ``$a, b$<\/code> and ``$c$<\/code>. Now the hamming weight is the sum of the first 2 bits added to the third bit. Easily we see that bit 0 is the XOR of the 3 bits. Now suppose we add the first two bits. Bit 0 of this sum is the XOR of the first two bits i.e. ``$a + b$<\/code>. Bit 1 of this sum is the carry of these first two bits i.e. both bits ANDed, ``$ab$<\/code>. Now we add the third bit. Bit 1 of the final sum is ``$ab + c\\cdot (a + b) = ab + ac + bc$<\/code>, hence all AND combinations of the input bits. This naturally extends for a greater number of input bits.<\/p>\n`
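The claimed identity — bit `$i$` of the hamming weight equals the XOR of the ANDs of all subsets of `$2^i$` input bits — can be checked exhaustively for small `$n$`:

```python
from itertools import combinations, product

def weight_bit(bits, i):
    # bit i of the Hamming weight: XOR over all ANDs of 2^i of the input bits
    total = 0
    for combo in combinations(bits, 2 ** i):
        prod = 1
        for b in combo:
            prod &= b
        total ^= prod
    return total

# compare against the binary expansion of the actual weight, for all
# 4-bit inputs and all weight bits
ok = all((sum(bits) >> i) & 1 == weight_bit(bits, i)
         for bits in product([0, 1], repeat=4)
         for i in range(3))
print(ok)  # True
```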

`Our algorithm builds upon and generalizes an approach by Cetin, Doroz, Sunar and Savas and handles vectors of any length. It builds up the products three at a time, attempting to minimize gates. Here is the code.<\/p>\n`

`from base import *\nimport arith\nfrom math import log\nimport utils\n\ndef hamming_weight_direct(v):\n c = v\n result_len = int(log(len(v), 2)) + 1\n print len(v), result_len\n result = [zero] * result_len\n for i in range(result_len):\n s = []\n n = len(c)\n rem = n % 3\n left_over = c[-rem:]\n for j in range(0, n - rem, 3):\n s.append(c[j] + c[j + 1] + c[j + 2])\n if rem != 0:\n s.append(sum(left_over, zero))\n\n result[i] = sum(s, zero)\n\n next_c = []\n\n for j in range(0, n - rem, 3):\n next_c.append(c[j]*c[j + 1] + c[j]*c[j + 2] + c[j + 1]*c[j + 2])\n if rem == 2:\n next_c.append(c[-1]*c[-2])\n for j in range(len(s)):\n t = zero\n for k in range(j + 1, len(s)):\n t += s[j]*s[k]\n next_c.append(t)\n\n c = next_c\n\n return result <\/code><\/pre>\n`If you run scdf.reduce_depth on this circuit followed by scdf.deduplicate to remove duplicate subcircuits then obtain the gate count, you will find that this approach has a higher gate count than the previous approach for the same `$n$<\/code>. However, this approach yields a smaller depth of exactly ``$\\log n$<\/code>.<\/p>","raw":"Suppose we have a binary string with ``$n$` bits. In this post, I will look at constructing (Boolean) arithmetic circuits that compute the hamming weight of a bitstring such that the circuits have minimal multiplicative depth. More precisely, we will develop algorithms handling arbitrary `$n$` that generate circuits to compute the hamming weight of a bitstring with `$n$` bits. Our circuits will consist of XOR and AND gates i.e. the Boolean case of arithmetic circuits. I have updated [SCDF](https:\/\/scss.tcd.ie\/~mclear\/blog?post=scdf) recently to include the algorithms described here and the code can be found on the [SCDF github repository](https:\/\/github.com\/ciphron\/scdf). So the code here is in Python, leveraging the SCDF framework.\n\nThe asymptotic depth of an optimal hamming weight circuit is `$O(\\log n)$`. 
Both approaches I take here achieve this but have different constants. The first approach, which I will now describe, is less efficient in terms of depth but more efficient in terms of circuit size.\n\n## First Approach\nOur first approach is probably the natural top-down construction. We use a Wallace-like tree structure that consists of adders. For simplicity, in this first approach, we will only handle an input bitstring whose length is a power of 2. Here is the code.\n```python\nfrom base import *\nimport arith\nimport utils\n\ndef hamming_weight_add(v):\n n = len(v)\n if not utils.is_power_of_2(n):\n raise ValueError('Length of vector is not a power of 2')\n elems = map(lambda x: [x], v)\n while n >= 2:\n n \/= 2\n for i in range(n):\n elems[i] = arith.ripple_add(elems[2*i], elems[2*i + 1])\n w = elems[0]\n\n return w\n```\nAs usual with SCDF, we import from the base module, but here we also rely on the arith module and utils module which you can find in the SCDF repo. Now before computing the depth, we reduce the depth of the circuit by running scdf.reduce_depth. We will consider the depth of the circuit computing the most significant bit of the hamming weight. Invariably, we see that this is `$2\\log{n} - 1$` where `$n$` is the number of input bits. However, we can do better than that, and this brings us to our next algorithm.\n\n## Second Approach\nThis next approach is based on the idea that the `$i$`-th bit of the hamming weight can be computed as the XOR of all AND combinations of `$2^i$` input bits. To see this, suppose we have 3 bits; `$a, b$` and `$c$`. Now the hamming weight is the sum of the first 2 bits added to the third bit. Easily we see that bit 0 is the XOR of the 3 bits. Now suppose we add the first two bits. Bit 0 of this sum is the XOR of the first two bits i.e. `$a + b$`. Bit 1 of this sum is the carry of these first two bits i.e. both bits ANDed, `$ab$`. Now we add the third bit. 
Bit 1 of the final sum is `$ab + c\\cdot (a + b) = ab + ac + bc$`, hence all AND combinations of the input bits. This naturally extends to a greater number of input bits.\n\nOur algorithm builds upon and generalizes an approach by Cetin, Doroz, Sunar and Savas and handles vectors of any length. It builds up the products three at a time, attempting to minimize gates. Here is the code.\n```python\nfrom base import *\nimport arith\nfrom math import log\nimport utils\n\ndef hamming_weight_direct(v):\n c = v\n result_len = int(log(len(v), 2)) + 1\n result = [zero] * result_len\n for i in range(result_len):\n s = []\n n = len(c)\n rem = n % 3\n left_over = c[-rem:]\n for j in range(0, n - rem, 3):\n s.append(c[j] + c[j + 1] + c[j + 2])\n if rem != 0:\n s.append(sum(left_over, zero))\n\n result[i] = sum(s, zero)\n\n next_c = []\n\n for j in range(0, n - rem, 3):\n next_c.append(c[j]*c[j + 1] + c[j]*c[j + 2] + c[j + 1]*c[j + 2])\n if rem == 2:\n next_c.append(c[-1]*c[-2])\n for j in range(len(s)):\n t = zero\n for k in range(j + 1, len(s)):\n t += s[j]*s[k]\n next_c.append(t)\n\n c = next_c\n\n return result \n```\nIf you run scdf.reduce_depth on this circuit followed by scdf.deduplicate to remove duplicate subcircuits and then obtain the gate count, you will find that this approach has a higher gate count than the previous approach for the same `$n$`. However, this approach yields a smaller depth of exactly `$\\log n$`.","timestamp":1575087094,"edited":1575087094},{"updates":[],"title":"Parasitic Numbers","slug":"parasitic","summary":"parasitic","image":"","tags":["parasitic"],"content":"`
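The claim above, that bit `$i$` of the Hamming weight equals the XOR of the ANDs of all subsets of `$2^i$` input bits, can be checked directly in plain Python. This is a standalone sketch, independent of SCDF; the helper name `weight_bit_direct` is my own:

```python
from itertools import combinations

def weight_bit_direct(bits, i):
    """Bit i of the Hamming weight, computed as the XOR (sum mod 2)
    of the AND (product) of every subset of 2**i input bits."""
    total = 0
    for subset in combinations(bits, 2 ** i):
        prod = 1
        for b in subset:
            prod &= b
        total ^= prod
    return total

# Check against the ordinary popcount for all 4-bit inputs.
for x in range(16):
    bits = [(x >> j) & 1 for j in range(4)]
    w = sum(bits)
    assert all(weight_bit_direct(bits, i) == (w >> i) & 1 for i in range(3))
```

Since the input bits are 0/1, the XOR of all size-`$2^i$` products is the binomial coefficient `$\binom{w}{2^i} \bmod 2$`, which by Lucas' theorem is exactly bit `$i$` of the weight `$w$`.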

`When I wrote my `

last post<\/a> on double trouble numbers, I wasn't aware of parasitic numbers, of which double trouble numbers are a special case (although our definition of double trouble numbers permitted leading zeros, something we will not permit here). In fact, there is a lot out there on these numbers, including a good Wikipedia article. However, I will take the same approach here as I took in my post on double trouble numbers, generalize it to ``$n$<\/code>-parasitic numbers and explore some further properties. I recommend that you read my post on double trouble numbers first because I will make use of the ideas it presents.<\/p>\n`

`def parasitic(n, b, d):\n \"\"\"Returns a list of the digits of the elementary n-parasitic number in base b \n with least significant digit d (ordered from least significant to most significant) given n, b and d\n \"\"\"\n\n assert n >= 2 and n < b and d >= n and d < b\n\n digits = [d]\n s = n * d\n while s != d:\n digits.append(s % b)\n s = (n * s) % (n*b - 1)\n return digits<\/code><\/pre>\n`We will now prove the following lemma.<\/p>\n

**Lemma<\/strong> ***The length of an elementary *`$n$<\/code>-parasitic number in base ``$b$<\/code> divides the multiplicative order of ``$b$<\/code> modulo ``$nb - 1$<\/code><\/em><\/p>\n`

*Proof<\/em> *`$\;$<\/code>Let ``$x$<\/code> be an `*elementary<\/em> *`$n$<\/code>-parasitic number of length ``$\\ell$<\/code> with least significant digit ``$d$<\/code>. Let ``$m$<\/code> be the multiplicative order of ``$b$<\/code> modulo ``$nb - 1$<\/code>. We need to show that ``$\\ell \\mid m$<\/code>. The following equality follows immediately from the definition of an ``$n$<\/code>-parasitic number:\n``$$d \\cdot b^{\\ell - 1} + \\frac{x - d}{b} = nx$$<\/code>\nWith simple algebra, we can rearrange the above to yield\n``$$x = \\frac{d \\cdot (b^\\ell - 1)}{nb - 1}$$<\/code>\nNow for ``$x$<\/code> to be an integer as required, then ``$nb - 1$<\/code> must divide ``$d \\cdot (b^\\ell - 1)$<\/code>. A sufficient condition for this is that ``$b^\\ell \\equiv 1 \\pmod{nb - 1}$<\/code>, in which case ``$\\ell = m$<\/code>, the multiplicative order of ``$b$<\/code> modulo ``$nb - 1$<\/code>. Suppose ``$\\ell < m$<\/code>. In this case, it holds that ``$nb - 1 \\mid d \\cdot (t - 1)$<\/code> where ``$t = b^\\ell \\pmod{nb - 1}$<\/code>. Now from the equality above, there is a larger ``$n$<\/code>-parasitic number ``$x'$<\/code> with the same least significant digit ``$d$<\/code> whose length is ``$m$<\/code> and which is equal to ``$\\frac{d \\cdot (b^m - 1)}{nb - 1}$<\/code>. From the generalization of our result in the double trouble numbers post, we know that all ``$n$<\/code>-parasitic numbers with the same least significant digit are unique up to concatenation. Since ``$x$<\/code> is elementary and ``$x'$<\/code> is formed by concatenating copies of ``$x$<\/code>, it follows that ``$\\ell \\mid m$<\/code>. ``$$\\tag*{$\\blacksquare$}$$<\/code><\/p>\n`

`As a corollary, it holds that ``$\\ell \\mid \\phi(nb - 1)$<\/code> where ``$\\phi$<\/code> is Euler's totient function. Furthermore, we note that the multiplicative order of ``$b$<\/code> modulo ``$nb - 1$<\/code> is equal to the multiplicative order of ``$n$<\/code> modulo ``$nb - 1$<\/code>.<\/p>\n`
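The lemma and its corollary are easy to check numerically. Below is a standalone sketch (the helper names `parasitic_digits` and `mult_order` are mine) that reimplements the digit recurrence from above and verifies, for a few `$(n, b, d)$` triples, the rotation property, the closed form `$x = d(b^\ell - 1)/(nb - 1)$` from the proof, and that the length divides the multiplicative order of `$b$` modulo `$nb - 1$`:

```python
def parasitic_digits(n, b, d):
    # digit recurrence from the post, least significant digit first
    digits = [d]
    s = n * d
    while s != d:
        digits.append(s % b)
        s = (n * s) % (n * b - 1)
    return digits

def mult_order(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) == 1)
    t, k = a % m, 1
    while t != 1:
        t = (t * a) % m
        k += 1
    return k

for (n, b, d) in [(2, 10, 2), (3, 10, 3), (4, 10, 4)]:
    digits = parasitic_digits(n, b, d)
    ell = len(digits)
    x = sum(dig * b**i for i, dig in enumerate(digits))
    # multiplying by n right-rotates the digits (lsd moves to the front)
    assert n * x == sum(dig * b**i
                        for i, dig in enumerate(digits[1:] + [digits[0]]))
    # closed form from the proof
    assert x * (n * b - 1) == d * (b**ell - 1)
    # the length divides the multiplicative order of b mod nb - 1
    assert mult_order(b, n * b - 1) % ell == 0
```

For example, `(n, b, d) = (4, 10, 4)` yields the classic 4-parasitic number 102564, whose quadruple 410256 is its right rotation.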

`Let us define an equivalence relation on ``$n$<\/code>-parasitic numbers. We write ``$x \\sim_{n, b} y$<\/code> if the ``$n$<\/code>-parasitic numbers ``$x$<\/code> and ``$y$<\/code> are circular permutations of each other in base ``$b$<\/code>. Let ``$P_{n, b}$<\/code> be the set of ``$n$<\/code>-parasitic numbers in base ``$b$<\/code>. The number of equivalence classes ``$|P_{n, b}\/\\sim_{n, b}|$<\/code> under the relation ``$\\sim_{n, b}$<\/code> is related to the number of cyclotomic cosets of ``$n$<\/code> modulo ``$nb - 1$<\/code>. More formally, we have the following equality\n``$$|P_{n, b}\/\\sim_{n, b}| = |\\{C_d : d \\in \\{n, \\ldots, b - 1\\}\\}|$$<\/code> where ``$C_d := \\{d \\cdot n^k : k \\in \\{0, \\ldots, nb - 1\\}\\}$<\/code>. For shorthand, let us define ``$a_{n}(b) = |P_{n, b}\/\\sim_{n, b}|$<\/code>.<\/p>\n`

`The sequence generated by ``$a_2(3), a_2(4), \\ldots$<\/code> is the sequence `

`A006694<\/a> in the OEIS. More precisely, ``$a_2(b) = \\text{A006694}(b - 1)$<\/code>.<\/p>\n`

`We will now take a look at the length of ``$n$<\/code>-parasitic numbers, in particular, the bases that give minimal and maximal lengths. Recall that the minimal length of an ``$n$<\/code>-parasitic number is 2. The sequence of bases that give minimal length ``$n$<\/code>-parasitic numbers is generated by ``$s_n(k) = (n + 1)k + n$<\/code> for ``$k \\geq 1$<\/code>. To see this, observe that for any ``$k \\geq 1$<\/code>, setting the least significant digit to ``$nk + n - 1$<\/code> yields an ``$n$<\/code>-parasitic number of length 2 in base ``$(n + 1)k + n$<\/code>. For ``$n = 2$<\/code>, we have that ``$s_2$<\/code> generates the sequence `

`A016789<\/a> in the OEIS.<\/p>\n`
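The minimal-length claim can be confirmed by brute force without committing to a particular least significant digit. A standalone sketch (the helper names `parasitic_digits` and `min_parasitic_len` are mine) checking that every base of the form `$(n + 1)k + n$` admits a length-2 `$n$`-parasitic number:

```python
def parasitic_digits(n, b, d):
    # digit recurrence for the elementary n-parasitic number with lsd d
    digits = [d]
    s = n * d
    while s != d:
        digits.append(s % b)
        s = (n * s) % (n * b - 1)
    return digits

def min_parasitic_len(n, b):
    # shortest elementary n-parasitic number over all admissible lsds
    return min(len(parasitic_digits(n, b, d)) for d in range(n, b))

# bases of the form (n + 1)k + n always admit a length-2 n-parasitic number
for n in range(2, 5):
    for k in range(1, 6):
        assert min_parasitic_len(n, (n + 1) * k + n) == 2
```

A length of 1 is impossible (it would force `$nd = d$`), so finding a length-2 number for some digit is exactly what the assertion checks.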

`Now in regard to maximal length ``$n$<\/code>-parasitic numbers, recall that the maximal length in base ``$b$<\/code> is ``$nb - 2$<\/code>. If a base has a maximal length ``$n$<\/code>-parasitic number, then it follows immediately from our corollary above that ``$nb - 1$<\/code> is prime. The converse does not necessarily hold. For certain choices of ``$n$<\/code>, there are no maximal length ``$n$<\/code>-parasitic numbers. An example is ``$n = 4$<\/code> because any base ``$b$<\/code> is a quadratic residue modulo ``$4b - 1$<\/code> whose square roots are ``$2b$<\/code> and ``$2b - 1$<\/code> and therefore its multiplicative order modulo ``$4b - 1$<\/code> cannot be ``$4b - 2$<\/code>. We do not have a general form for the bases that give maximal lengths. An interesting special case is ``$n = 2$<\/code> where the sequence of bases giving maximal lengths is the sequence `

`A051733<\/a> in the OEIS.<\/p>\n`
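The quadratic-residue observation for `$n = 4$` is mechanical: `$(2b)^2 = b(4b - 1) + b \equiv b$` and `$(2b - 1)^2 = (4b - 1)(b - 1) + b \equiv b \pmod{4b - 1}$`. A quick standalone check:

```python
# every base b is a quadratic residue modulo 4b - 1,
# with square roots 2b and 2b - 1
for b in range(2, 200):
    m = 4 * b - 1
    assert pow(2 * b, 2, m) == b        # (2b)^2 = b(4b - 1) + b
    assert pow(2 * b - 1, 2, m) == b    # (2b - 1)^2 = (4b - 1)(b - 1) + b
```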

`def transition(b, state):\n \"\"\"State transition function with base b.\"\"\"\n\n d, c = state\n return ((2 * d + c) % b, (2 * d + c) \/\/ b)\n\ndef find_min_dt(b):\n \"\"\" Return a list of the digits of the minimum (by value) DT number for base b ordered from least significant to most significant. \"\"\"\n\n min_digits = None\n min_len = 2 * b\n for lsd in range(1, b):\n digits = [lsd]\n init_state = (lsd, 0)\n state = transition(b, init_state)\n while state != init_state:\n d, c = state\n digits.append(d)\n state = transition(b, state)\n if len(digits) < min_len:\n min_digits = digits\n min_len = len(digits)\n return min_digits<\/code><\/pre>","raw":"A few years ago I came across a programming problem concerning numbers that were referred to as Double Trouble (DT) numbers, and I will borrow the term here. I looked into them with a friend, Colm Bhandal, because we found them interesting. I will share some of their properties in this post.\n\nFirst, what is a DT number? A DT number with respect to some base `$b$` is a positive integer `$n$` such that when the digits (in base `$b$`) are right rotated one position, the resulting number is `$2n$` i.e. double the original number. Some of the questions we will answer are: are there an infinite number of DT numbers for a given base and what form do they take? We will first prove a result that will help answer these questions, namely we will show that for every base `$b \\geq 2$`, there exists a DT number with least significant digit `$d$` for all `$d \\in \\{1, \\ldots, b - 1\\}$`. The proof proceeds as follows.\n\nLet `$d_{k - 1}b^{k - 1} + d_{k - 2}b^{k - 2} + \\cdots + d_0$` be a `$k$`-digit DT number. Then by definition of a DT number, we have\n`$$d_0b^{k - 1} + d_{k - 1}b^{k - 2} + \\cdots + d_1 = 2 \\cdot (d_{k - 1}b^{k - 1} + d_{k - 2}b^{k - 2} + \\cdots + d_0)$$`\nThis implies that `$d_1 = 2d_0$` and `$d_2 = 2d_1 + c$` where `$c \\in \\{0, 1\\}$` is the carry of `$2 \\cdot d_0$` i.e. 
`$c = \\lfloor 2d_0 \/ b \\rfloor$`. More generally, we have `$d_{i + 1} = (2d_i + c_i) \\bmod b$` and `$c_{i + 1} = \\lfloor (2d_i + c_i) \/ b \\rfloor$` with `$c_0 = 0$` and `$d_0$` being the least significant digit. \nWe will view the pair `$(d_i, c_i)$` as a *state*. The state transition function `$\\delta : \\mathbb{Z}_b \\times \\{0, 1\\} \\to \\mathbb{Z}_b \\times \\{0, 1\\}$` is defined as above; that is\n`$$\\delta : (d_i, c_i) \\mapsto ((2d_i + c_i) \\bmod b, \\lfloor (2d_i + c_i) \/ b \\rfloor)$$`\nThe starting state for a DT number corresponding to its least significant digit is `$(d_0, 0)$` where `$d_0 \\in \\mathbb{Z}_b$` is the least significant digit. A DT number is generated when the state returns to `$(d_0, 0)$` and the digits of the DT number are the `$d_i$` components of the intermediate states in addition to the starting state. The state will always return to the starting state i.e. it will not get stuck in cycles that exclude the starting state. To see this, observe that every state has a unique predecessor state, and therefore, necessarily, the state must return to the starting state. Hence, starting with any least significant digit, we can generate a finite length DT number in any given base. This proves the result.\n\nThe maximum length of what we call an *elementary* DT number is `$2(b - 1)$` digits since there are a total of `$2b$` possible states and the states `$(0, 0)$` and `$(b - 1, 1)$` cannot be encountered because a DT number is positive and the latter state transitions to itself. By *elementary*, we mean that the DT number is the minimum length DT number with a chosen least significant digit. The minimum length of a DT number is 2 digits. Now given a DT number with `$k$` digits, concatenating the number (joining its sequence of digits) with itself yields another (non-elementary) DT number with `$2k$` digits. Indeed repeating the process yields DT numbers whose lengths are arbitrary multiples of `$k$`. 
And this tells us that there are infinitely many DT numbers for any chosen base `$b \\geq 2$`.\n\nAn algorithm in Python to find the minimum (by value) DT number for a chosen base is shown below.\n```python\ndef transition(b, state):\n \"\"\"State transition function with base b.\"\"\"\n\n d, c = state\n return ((2 * d + c) % b, (2 * d + c) \/\/ b)\n\ndef find_min_dt(b):\n \"\"\" Return a list of the digits of the minimum (by value) DT number for base b ordered from least significant to most significant. \"\"\"\n\n min_digits = None\n min_len = 2 * b\n for lsd in range(1, b):\n digits = [lsd]\n init_state = (lsd, 0)\n state = transition(b, init_state)\n while state != init_state:\n d, c = state\n digits.append(d)\n state = transition(b, state)\n if len(digits) < min_len:\n min_digits = digits\n min_len = len(digits)\n return min_digits\n```","timestamp":1545973976,"edited":1545973976},{"updates":[],"title":"Simple Circuit Description Framework","slug":"scdf","summary":"SCDF","image":"","tags":["scdf"],"content":"`
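To see the state machine above in action, here is a standalone sketch (the helper `dt_digits` is my own name, with the carry taken as `$\lfloor (2d + c)/b \rfloor$`) that generates the elementary DT number for base 10 with least significant digit 5 and confirms that doubling right-rotates its digits:

```python
def dt_digits(b, lsd):
    """Digits (least significant first) of the elementary DT number
    in base b with least significant digit lsd."""
    digits = [lsd]
    state = ((2 * lsd) % b, (2 * lsd) // b)
    while state != (lsd, 0):
        d, c = state
        digits.append(d)
        # next state: new digit and carry of 2d + c
        state = ((2 * d + c) % b, (2 * d + c) // b)
    return digits

digits = dt_digits(10, 5)
x = sum(d * 10**i for i, d in enumerate(digits))
rotated = sum(d * 10**i for i, d in enumerate(digits[1:] + [digits[0]]))
assert 2 * x == rotated  # doubling right-rotates the digits
```

For base 10 and least significant digit 5 this produces the 18-digit number 263157894736842105, whose double 526315789473684210 is indeed its right rotation.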

I have deprecated SCDL<\/a>. It is much simpler and more terse to express arithmetic circuits in Python. Simple Circuit Description Framework (SCDF) is a simple Python framework to express, manipulate and evaluate arithmetic circuits with particular suitability for fully homomorphic encryption (FHE). It is available at https:\/\/github.com\/ciphron\/scdf<\/a><\/a>. I intend to write\/use Python bindings for the FHE library HElib to port my FHE Evaluator<\/a> to Python and integrate it with this framework.<\/p>\n

First let us define some constants.<\/p>\n

`import scdf\n\nzero = scdf.Constant(0)\none = scdf.Constant(1)<\/code><\/pre>\n`Now we are ready to define the equality checking circuit for two bits.<\/p>\n

`def eq1(x, y):\n return x + y + one<\/code><\/pre>\n`Next we will define the equality checking circuit for two arbitrary length integers (of equal length). Each integer is given as a list of bits (more precisely, a wire that evaluates to a bit) ordered from least significant to most significant.<\/p>\n

`def eq(xs, ys):\n is_eq = one\n for i in range(len(xs)):\n is_eq *= eq1(xs[i], ys[i])\n return is_eq<\/code><\/pre>\n`

The code so far can be found in the file base.py<\/a> included with scdf.<\/p>\n

Now we have defined our desired circuit. The next step is to set up the inputs.<\/p>\n

`NUM_BITS = 8 # we will use 8-bit integers\nx = vect_input('x', NUM_BITS)\ny = vect_input('y', NUM_BITS)<\/code><\/pre>\n`We also have to set up the circuit to use the above inputs. Since the depth of the circuit (eq) that we defined may not be optimal, we will also avail of the scdf.reduce_depth function.<\/p>\n

`circ = scdf.reduce_depth(eq(x, y))<\/code><\/pre>\n`To evaluate a circuit, we have to specify what type the values are, and the addition and multiplication operators must be overloaded by this type. For this purpose, we define a simple Bit class.<\/p>\n

`class Bit(object):\n def __init__(self, value):\n self.value = value\n\n def __mul__(self, other):\n return Bit((self.value * other.value) % 2)\n\n def __add__(self, other):\n return Bit((self.value + other.value) % 2)<\/code><\/pre>\n`To evaluate the circuit, we need to set up an input map, which maps the names of inputs to their values. In our case, the values will be of type Bit. We have the following helper functions to aid in this task.<\/p>\n

` def to_bin(n, nbits):\n nbin = [0] * nbits\n i = 0\n while i < nbits and n != 0:\n nbin[i] = n & 1\n n >>= 1\n i += 1\n return nbin\n\n def add_num_to_map(name, num, input_map):\n bin_rep = to_bin(num, NUM_BITS)\n for i in range(NUM_BITS):\n input_map['%s%d' % (name, i)] = Bit(bin_rep[i])<\/code><\/pre>\n`Suppose the first integer, x, is set to 12 and the second integer, y, is set to 13. We write the following code:<\/p>\n

` input_map = {}\n add_num_to_map('x', 12, input_map)\n add_num_to_map('y', 13, input_map)<\/code><\/pre>\n`Finally, we are ready to evaluate the circuit and print the result.<\/p>\n

` map_constant = lambda v: Bit(v)\n result = scdf.eval_circuit(circ, input_map, map_constant)\n print result.value<\/code><\/pre>\n`Note the map_constant function maps constant values given as integers to instances of the appropriate type, in our case, Bit.<\/p>\n
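The circuit logic itself can be sanity-checked in plain Python without scdf. This is a standalone model of the same equality circuit, not the scdf API (the helper `to_bits` is my own name):

```python
class Bit(object):
    """Value in GF(2): + is XOR, * is AND."""
    def __init__(self, value):
        self.value = value % 2
    def __add__(self, other):
        return Bit((self.value + other.value) % 2)
    def __mul__(self, other):
        return Bit(self.value * other.value)

one = Bit(1)

def eq1(x, y):
    return x + y + one          # 1 iff the two bits agree

def eq(xs, ys):
    is_eq = one
    for x, y in zip(xs, ys):
        is_eq *= eq1(x, y)      # AND together the per-bit equality checks
    return is_eq

def to_bits(n, nbits):
    # bits ordered least significant to most significant
    return [Bit((n >> i) & 1) for i in range(nbits)]

assert eq(to_bits(12, 8), to_bits(13, 8)).value == 0
assert eq(to_bits(12, 8), to_bits(12, 8)).value == 1
```

With 12 and 13 the circuit outputs 0 (unequal), and with two copies of 12 it outputs 1, matching what the scdf evaluation should print.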

The above code is a gentle illustration of scdf. For a more complex circuit, see the Direct Sort sorting circuit in sort.py<\/a>. Many FHE schemes support batching. They operate on vectors and support SIMD operations. We model this in scdf also by permitting left and right vector rotations. In sort.py<\/a>, a SIMD variant of the Direct Sort algorithm is implemented and you can see how to use it in test_sort.py<\/a>.<\/p>","raw":"\nI have deprecated [SCDL](https:\/\/scss.tcd.ie\/~mclear\/blog?post=scdl). It is much simpler and more terse to express arithmetic circuits in Python. Simple Circuit Description Framework (SCDF) is a simple Python framework to express, manipulate and evaluate arithmetic circuits with particular suitability for fully homomorphic encryption (FHE). It is available at [https:\/\/github.com\/ciphron\/scdf](https:\/\/github.com\/ciphron\/scdf). I intend to write\/use Python bindings for the FHE library HElib to port my [FHE Evaluator](https:\/\/scss.tcd.ie\/~mclear\/blog?post=fhe) to Python and integrate it with this framework.\n\nCircuits are composed of two basic operations: addition (+) and multiplication (\\*). We will consider an example of a Boolean circuit. In the Boolean case, + corresponds to XOR and \\* corresponds to AND. In the following example, we will define and evaluate a circuit that checks the equality of two 8-bit integers.\n\nFirst let us define some constants.\n```python\nimport scdf\n\nzero = scdf.Constant(0)\none = scdf.Constant(1)\n```\nNow we are ready to define the equality checking circuit for two bits.\n```python\ndef eq1(x, y):\n return x + y + one\n```\nNext we will define the equality checking circuit for two arbitrary length integers (of equal length). 
Each integer is given as a list of bits (more precisely, a wire that evaluates to a bit) ordered from least significant to most significant.\n```python\ndef eq(xs, ys):\n is_eq = one\n for i in range(len(xs)):\n is_eq *= eq1(xs[i], ys[i])\n return is_eq\n```\n\nThe code so far can be found in the file [base.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/base.py) included with scdf.\n\nNow we have defined our desired circuit. The next step is to set up the inputs.\n```python\nNUM_BITS = 8 # we will use 8-bit integers\nx = vect_input('x', NUM_BITS)\ny = vect_input('y', NUM_BITS)\n```\nWe also have to set up the circuit to use the above inputs. Since the depth of the circuit (eq) that we defined may not be optimal, we will also avail of the scdf.reduce_depth function.\n```python\ncirc = scdf.reduce_depth(eq(x, y))\n```\nTo evaluate a circuit, we have to specify what type the values are, and the addition and multiplication operators must be overloaded by this type. For this purpose, we define a simple Bit class.\n```python\nclass Bit(object):\n def __init__(self, value):\n self.value = value\n\n def __mul__(self, other):\n return Bit((self.value * other.value) % 2)\n\n def __add__(self, other):\n return Bit((self.value + other.value) % 2)\n```\nTo evaluate the circuit, we need to set up an input map, which maps the names of inputs to their values. In our case, the values will be of type Bit. We have the following helper functions to aid in this task.\n\n```python\n def to_bin(n, nbits):\n nbin = [0] * nbits\n i = 0\n while i < nbits and n != 0:\n nbin[i] = n & 1\n n >>= 1\n i += 1\n return nbin\n\n def add_num_to_map(name, num, input_map):\n bin_rep = to_bin(num, NUM_BITS)\n for i in range(NUM_BITS):\n input_map['%s%d' % (name, i)] = Bit(bin_rep[i])\n```\nSuppose the first integer, x, is set to 12 and the second integer, y, is set to 13. 
We write the following code:\n```python\n input_map = {}\n add_num_to_map('x', 12, input_map)\n add_num_to_map('y', 13, input_map)\n```\nFinally, we are ready to evaluate the circuit and print the result.\n```python\n map_constant = lambda v: Bit(v)\n result = scdf.eval_circuit(circ, input_map, map_constant)\n print result.value\n```\nNote the map_constant function maps constant values given as integers to instances of the appropriate type, in our case, Bit.\n\nThe above code is a gentle illustration of scdf. For a more complex circuit, see the Direct Sort sorting circuit in [sort.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/sort.py). Many FHE schemes support batching. They operate on vectors and support SIMD operations. We model this in scdf also by permitting left and right vector rotations. In [sort.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/sort.py), a SIMD variant of the Direct Sort algorithm is implemented and you can see how to use it in [test_sort.py](https:\/\/github.com\/ciphron\/scdf\/blob\/master\/test_sort.py).","timestamp":1545197885,"edited":1545197885},{"updates":[],"title":"Fully Homomorphic Encryption Evaluator","slug":"fhe","summary":"FHE Evaluator","image":"","tags":["fhe"],"content":"

I wrote a fully homomorphic encryption (FHE) evaluator which you can find at https:\/\/github.com\/ciphron\/fhe<\/a><\/a>. This software runs programs written in SCDL<\/a> (see the link for compiling imperative languages to SCDL) on encrypted data. It allows you to generate a public and private key, encrypt the inputs of an SCDL program to produce a ciphertext file and evaluate the SCDL program homomorphically with a chosen ciphertext file. The FHE evaluator uses HElib<\/a> in the backend, which is an implementation of the BGV<\/a> scheme. When you generate a keypair, you must choose a security parameter and a maximum circuit depth to support. The former corresponds to the security strength (in bits) e.g: 128 bits. The latter comes into play because the current version of HElib does not implement bootstrapping and therefore, only circuits of bounded depth can be evaluated. The program allows you to check the depth of a circuit described in an SCDL file. Using the evaluator, you will quickly see that FHE is presently by no means practical. However you will be able to run small simple programs on encrypted data. I must point out that my implementation does not exploit the full power of the HElib library insofar as vectors are not encrypted, only single bits. This means the ciphertexts are far less compact than they could be. I hope to add vector support at a later stage. Also, the implementation probably has bugs which need to be ironed out. Email me with details if you find any. Hopefully, this program will be useful to someone for experimentation or research use.<\/p>","raw":"\nI wrote a fully homomorphic encryption (FHE) evaluator which you can find at [https:\/\/github.com\/ciphron\/fhe](https:\/\/github.com\/ciphron\/fhe). This software runs programs written in [SCDL](\/blog?post=scdl) (see the link for compiling imperative languages to SCDL) on encrypted data. 
It allows you to generate a public and private key, encrypt the inputs of an SCDL program to produce a ciphertext file and evaluate the SCDL program homomorphically with a chosen ciphertext file. The FHE evaluator uses [HElib](https:\/\/github.com\/shaih\/HElib) in the backend, which is an implementation of the [BGV](https:\/\/eprint.iacr.org\/2011\/277) scheme. When you generate a keypair, you must choose a security parameter and a maximum circuit depth to support. The former corresponds to the security strength (in bits) e.g: 128 bits. The latter comes into play because the current version of HElib does not implement bootstrapping and therefore, only circuits of bounded depth can be evaluated. The program allows you to check the depth of a circuit described in an SCDL file. Using the evaluator, you will quickly see that FHE is presently by no means practical. However you will be able to run small simple programs on encrypted data. I must point out that my implementation does not exploit the full power of the HElib library insofar as vectors are not encrypted, only single bits. This means the ciphertexts are far less compact than they could be. I hope to add vector support at a later stage. Also, the implementation probably has bugs which need to be ironed out. Email me with details if you find any. Hopefully, this program will be useful to someone for experimentation or research use.\n","timestamp":1487166794,"edited":1487166794},{"updates":[],"title":"Simple Circuit Description Language","slug":"scdl","summary":"SCDL","image":"","tags":["scdl"],"content":"

Simple Circuit Description Language (SCDL) is a simple functional language to describe an arithmetic circuit over some ring. It has two basic operations, addition (+) and multiplication (*). The language can be interpreted in any ring. For example, when interpreted in the Boolean field, "+" corresponds to XOR and "*" corresponds to AND. I developed this language for the purpose of Fully Homomorphic Encryption (FHE). FHE allows computation to be performed on encrypted data but current schemes evaluate the computation as a circuit. Most such schemes evaluate circuits that consist of AND and XOR gates. So far an evaluator has been implemented for SCDL that can be found at https:\/\/github.com\/ciphron\/scdl<\/a><\/a>. For documentation on the language and how to use the evaluator, see https:\/\/github.com\/ciphron\/scdl\/blob\/master\/scdl.pdf<\/a><\/a>.<\/p>\n

FairplayMP Simple Function Description Language (SFDL) v2.0 Compiler to SCDL<\/h3>\nThe FairplayMP project (http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP.html<\/a><\/a>) incorporates a compiler for a language called Simple Function Description Language (SFDL) that translates the SFDL code into a Boolean circuit. SFDL version 2.0 (http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP\/SFDL2.0.pdf<\/a>) is a simple imperative language with some constraints such as bounded loops (i.e. supporting a maximum number of iterations given by a constant) and no recursion. The circuit that is generated by the SFDLv2.0 compiler is represented by a low-level language created by the FairplayMP developers. I've written a compiler that translates that low-level language into SCDL. A vars file (see the SCDL documentation) is also automatically produced that gives a high-level description of the inputs and outputs to a program in terms of their names and types. The compiler is available at https:\/\/github.com\/ciphron\/sfdl_to_scdl<\/a><\/a>. This compiler allows you to simply compile an SFDL file into SCDL (it first runs the SFDLv2.0 compiler in the background). Say you want to compile the included example max.sfdl, then simply run .\/compile max.sfdl. Two files are produced: max.scdl and the associated vars file max.scdl.vars. Then you can use the SCDL evaluator to evaluate the circuits and the outputs are printed according to their types.<\/p>","raw":"\nSimple Circuit Description Language (SCDL) is a simple functional language to describe an arithmetic circuit over some ring. It has two basic operations, addition (+) and multiplication (\\*). The language can be interpreted in any ring. For example, when interpreted in the Boolean field, \"+\" corresponds to XOR and \"*\" corresponds to AND. I developed this language for the purpose of Fully Homomorphic Encryption (FHE). 
FHE allows computation to be performed on encrypted data but current schemes evaluate the computation as a circuit. Most such schemes evaluate circuits that consist of AND and XOR gates. So far an evaluator has been implemented for SCDL that can be found at [https:\/\/github.com\/ciphron\/scdl](\"https:\/\/github.com\/ciphron\/scdl\"). For documentation on the language and how to use the evaluator, see [https:\/\/github.com\/ciphron\/scdl\/blob\/master\/scdl.pdf](https:\/\/github.com\/ciphron\/scdl\/blob\/master\/scdl.pdf).\n\n\n### FairplayMP Simple Function Description Language (SFDL) v2.0 Compiler to SCDL\nThe FairplayMP project ([http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP.html](http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP.html)) incorporates a compiler for a language called Simple Function Description Language (SFDL) that translates the SFDL code into a Boolean circuit. SFDL version 2.0 (http:\/\/www.cs.huji.ac.il\/project\/Fairplay\/FairplayMP\/SFDL2.0.pdf) is a simple imperative language with some constraints such as bounded loops (i.e. supporting a maximum number of iterations given by a constant) and no recursion. The circuit that is generated by the SFDLv2.0 compiler is represented by a low-level language created by the FairplayMP developers. I've written a compiler that translates that low-level language into SCDL. A vars file (see the SCDL documentation) is also automatically produced that gives a high-level description of the inputs and outputs to a program in terms of their names and types. The compiler is available at [https:\/\/github.com\/ciphron\/sfdl_to_scdl](https:\/\/github.com\/ciphron\/sfdl_to_scdl). This compiler allows you to simply compile an SFDL file into SCDL (it first runs the SFDLv2.0 compiler in the background). Say you want to compile the included example max.sfdl, then simply run .\/compile max.sfdl. Two files are produced: max.scdl and the associated vars file max.scdl.vars. 
Then you can use the SCDL evaluator to evaluate the circuits and the outputs are printed according to their types.\n","timestamp":1487166730,"edited":1487166730}]