We first experiment with the LTM to examine the observed frequencies of Boolean functions in simulation. We built networks of \(N = 10000\) nodes with \(k = 2\) inputs, sweeping mean degrees \(z \in \{0, \frac{1}{2}, 1, 2, 4, 8, 16, 64, 256, 1024, 4096, 10000\}\) over 10000 trials for each value of \(z\). At mean degree \(z = 4\), we find that the frequency of functions is highly skewed just after the appearance of the GCC (Fig. 3a). An experiment with \(k = 4\) inputs, again with \(N = 10000\) nodes and mean degree \(z = 4\), ensembled over 500 realizations, also yields an approximately exponential decay of the rank-ordered function frequencies (Fig. 3b). (See Supplementary Information A.4.)
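The skew can be reproduced in miniature. The sketch below assumes, purely for illustration, that seed perturbations spread by simple reachability, so that *which* seeds can influence a randomly chosen output node stands in for the computed function; the graph size, trial count, and helpers (`er_graph`, `observed_function`) are our own illustrative choices, not the paper's protocol.

```python
import random
from collections import Counter, deque

def er_graph(n, z, rng):
    """Sample a sparse random graph with mean degree ~z (G(n, M) with M = z*n/2)."""
    adj = [[] for _ in range(n)]
    for _ in range(int(z * n / 2)):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].append(v)
            adj[v].append(u)
    return adj

def reachable(adj, src):
    """Set of nodes reachable from src (BFS)."""
    seen = {src}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                q.append(y)
    return seen

def observed_function(adj, a, b, u):
    """Which seeds can influence u -- a crude stand-in for the computed function."""
    return (u in reachable(adj, a), u in reachable(adj, b))

rng = random.Random(0)
counts = Counter()
for _ in range(300):
    adj = er_graph(2000, 4.0, rng)
    a, b, u = rng.sample(range(2000), 3)
    counts[observed_function(adj, a, b, u)] += 1
print(counts.most_common())
```

Well above the threshold (here \(z = 4\)), almost all trials fall into the single class where both seeds reach the output, and the remaining classes are rare: the distribution over classes is already strongly skewed.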

To explain these skewed distributions of functions, we ask: *"What is the probability of obtaining the simplest network that computes each of these functions?"* From the logic motifs (Fig. 1), we can derive the probability of each monotonic function. For example, if there is no path from either seed node *a* or *b* to the output node *u*, we obtain \(f_0\); therefore

$$\begin{aligned} p(f_0) \propto (1 - p_{path})^2, \end{aligned}$$

where \(p_{path}\) is the probability of a path between two randomly chosen nodes.

The function \(f_1\) requires a path from both *a* and *b* to *u*; therefore

$$\begin{aligned} p(f_1) \propto p_{path}^2.\end{aligned}$$

(1)

However, invoking percolation theory, in large graphs a path between two nodes exists (with high probability) if and only if both nodes belong to the giant connected component (GCC)^{25,37}.

Thus, again from the logic motifs (Fig. 1),

$$\begin{aligned} p(f_1) \propto p_{path(a,b,u)} \propto p_{gcc}^3, \end{aligned}$$

where \(p_{gcc}\) is the probability that a random node belongs to the GCC.
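The relation \(p_{path} \approx p_{gcc}^2\) can be checked numerically; the following is a minimal sketch, assuming a \(G(n, M)\) approximation to the Erdős-Rényi ensemble (the helper `components` and the sample size are illustrative choices):

```python
import random

def components(n, z, rng):
    """Union-find over a sampled graph with mean degree ~z; returns component sizes."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for _ in range(int(z * n / 2)):
        a, b = rng.randrange(n), rng.randrange(n)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    sizes = {}
    for x in range(n):
        r = find(x)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

rng = random.Random(1)
n, z = 20000, 4.0
sizes = components(n, z, rng)
p_gcc = sizes[0] / n  # fraction of nodes in the largest component
# exact path probability on this sample: both endpoints in the same component
p_path = sum(s * (s - 1) for s in sizes) / (n * (n - 1))
print(p_gcc, p_path, p_gcc ** 2)
```

On a single realization at \(z = 4\), the exact same-component probability agrees with \(p_{gcc}^2\) to well within a percent, since small components contribute negligibly.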

Let us define \(v = p_{gcc}\). From^{37} we know that *v* is implicitly given by the relation

$$\begin{aligned}v = 1 – e^{-zv}, \end{aligned}$$

(2)

where *z* is the mean degree. (See Supplementary Information A.3 for details.)
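Equation (2) has no closed-form solution, but its nontrivial root is easily obtained by fixed-point iteration; a minimal sketch (the iteration count and starting point are illustrative choices):

```python
import math

def gcc_fraction(z, iters=200):
    """Solve v = 1 - exp(-z*v) by fixed-point iteration (Eq. 2)."""
    v = 1.0  # start from 1 to avoid the trivial root v = 0
    for _ in range(iters):
        v = 1.0 - math.exp(-z * v)
    return v

for z in [0.5, 1.0, 2.0, 4.0, 16.0]:
    print(z, round(gcc_fraction(z), 4))
```

Below the threshold (\(z < 1\)) the iteration collapses to \(v = 0\); above it, \(v\) grows rapidly toward 1 (e.g. \(v \approx 0.7968\) at \(z = 2\) and \(v \approx 0.9802\) at \(z = 4\)).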

The number of paths required from the seed nodes to the output node *u*, which computes a monotonic function *f*, is equivalent to the *decision tree complexity* \(C(f)\), the depth of the shortest decision tree that computes *f*: for *u* to depend on a seed node, perturbation information from that seed must travel along a path to *u*.

In the Hamming-cube representation of a Boolean function, its decision tree complexity *C* equals the dimension *D* minus the number of congruent axial reflections *R* along the axes (details in Supplementary Information A.1). In other words, if the Hamming cube of a Boolean function is constant along an axis, the function is independent of that axis, so

$$\begin{aligned} C = D – R. \end{aligned}$$

(3)
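Equation (3) can be evaluated directly from a truth table by counting the axes along which the Hamming cube is reflection-symmetric; a minimal sketch (the example tables assume the standard truth-table index convention for \(f_n\)):

```python
def dt_complexity(truth, k):
    """C = D - R (Eq. 3): dimension k minus the number of axes along which
    the Hamming-cube representation is constant (i.e. the function does not
    depend on that input)."""
    r = 0
    for axis in range(k):
        if all(truth[x] == truth[x ^ (1 << axis)] for x in range(2 ** k)):
            r += 1
    return k - r

AND  = [0, 0, 0, 1]   # f_1: output 1 only when both inputs are 1
XOR  = [0, 1, 1, 0]   # f_6: output 1 when inputs differ
PROJ = [0, 1, 0, 1]   # depends on the low-order input only
ZERO = [0, 0, 0, 0]   # f_0: constant function
print([dt_complexity(f, 2) for f in (AND, XOR, PROJ, ZERO)])
```

AND and XOR depend on both inputs (\(C = 2\)), the projection on one (\(C = 1\)), and the constant function on none (\(C = 0\)).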

In other words, *the number of paths required by a monotonic Boolean function is exactly the number of axial reflection asymmetries of its Hamming cube.* This allows us to relate the symmetry of a function to its decision tree complexity, and hence to its frequency. Recall that the percolation threshold of an arbitrarily large Erdős-Rényi-Gilbert graph occurs at the critical mean degree \(z_c = 1\), i.e. at very sparse connectivity. Since \(p_c = \frac{z_c}{N-1}\), the critical connection probability \(p_c \ll 1\). The network therefore contains a GCC yet remains tree-like even at practical mean degrees above \(z_c = 1\); the clustering coefficient is \(C_{\mathrm{clus}} \approx p\)^{25}. (For example, with \(N = 10000\) and \(z = 10\), \(C_{\mathrm{clus}} \approx 1/1000\).) In a tree, the number of nodes is one more than the number of edges, \(N = |E| + 1\), so the \(C(f)\) required paths meeting at *u* involve \(C(f) + 1\) relevant nodes, each of which must lie in the GCC. Therefore, as \(p \rightarrow p_c\),

$$\begin{aligned} p(f) \propto p_{gcc}^{C(f) + 1}.\end{aligned}$$

(4)
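The exponential rank-order decay implied by equation (4) can be illustrated by combining it with equation (2); the mean degree \(z = 1.2\) below is an illustrative choice just above the threshold, where \(v\) is small and the decay is steep:

```python
import math

def gcc_fraction(z, iters=500):
    """Nontrivial fixed point of v = 1 - exp(-z*v) (Eq. 2)."""
    v = 1.0
    for _ in range(iters):
        v = 1.0 - math.exp(-z * v)
    return v

# Predicted frequencies p(f) ~ v^(C+1) (Eq. 4) for complexities C = 0, 1, 2
# (the possible decision tree complexities for k = 2 inputs).
z = 1.2  # illustrative mean degree slightly above z_c = 1
v = gcc_fraction(z)
pred = [v ** (c + 1) for c in range(3)]
print(v, pred)
```

Each unit of complexity multiplies the predicted frequency by another factor of \(v < 1\), giving the exponential decay of frequency with rank seen in Fig. 3.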

Indeed, the probabilities from equation (4) are highly correlated with the probabilities obtained from the logic motifs (equation (1)), and the observed function frequencies are likewise proportional to equation (4) (Fig. 3a), with Pearson correlations of approximately 1.0 for \(k = 2\) and 0.74 for \(k = 4\). Equation (4) implies an inverse rank-order relationship between frequency and decision tree complexity, with frequency appearing to decrease exponentially. This result is particularly surprising given that, as noted at the outset, the distribution of decision tree complexity over the truth tables of all Boolean functions is exponentially increasing.

### Distribution of functions with antagonism

In a similar simulation with \(N = 10000\) nodes and \(k = 2\) inputs, ensembled over 500 trials across a range of mean degrees *z* and fractions of antagonistic nodes \(\theta \in \{0, \frac{1}{6}, \frac{2}{6}, \ldots, 1\}\), we find a sharp increase in the number of unique nonzero functions with both *z* and \(\theta\) (Fig. 4a). The number of unique functions is maximized over several orders of magnitude near criticality, \(z \in [2^3, 2^{10}]\), when \(\theta = 1/3\). Observing that antagonism and inhibition are interchangeable^{12} (Supplementary Information A.2), this supports the optimality for information processing of the roughly \(30\%\) inhibition found in other studies^{38}, and suggests why this fraction of inhibitory neurons appears to be biologically prevalent.

A similar rank ordering of functions is also observed here for this combination of LTM and ALTM nodes at \(z = 64, \theta = 1/3\); as with the LTM, the frequency is again proportional to the probability derived from the function's complexity (Fig. 4b), with a Pearson correlation of 0.91.

Note, however, that equation (3) underestimates the number of paths required for non-monotonic functions. For example, \(f_6\) (XOR) requires 6 paths among 5 nodes, all of which must lie in the GCC (Fig. 5), so \(p(f_6) \propto p_{gcc}^5\); the decision tree complexity of this function, however, is \(C = 2\), so equation (4) predicts \(p(f_6) \propto p_{gcc}^3\). Non-monotonic functions therefore require a more informative measure of complexity.