Verification of linearity in registration efficiency
The registration efficiency depends on the energy of the alpha particles reaching the surface of the detector. For the source-to-detector distance of 1.5 cm, the alpha particles reach the surface with an average energy of 4 MeV. Several factors must be optimized to establish the conditions for linearity in registration efficiency: (1) The range of alpha particles with an energy of 4 MeV in the PADC detector is about 20.5 \(\upmu\)m20,21. The etching time (\(t_{e}\)) should be much shorter than the etching time needed for the track depth to reach 20.5 \(\upmu\)m (denoted \(t_{R}\)), so that all alpha particles incident on the PADC detector are detected22. However, too short an etching time produces small track diameters and lowers the detectability of the tracks under an ordinary optical microscope. (2) The registration efficiency must account for the coalescence of registered tracks when the fluence exceeds a specific limit, i.e., the maximum number of tracks registered per unit area that maintains linearity between track density and exposure time must be determined. (3) The registered tracks must pass a normality test under the optimized conditions.
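As a rough illustration of condition (1), the sketch below compares the etched track depth after a time \(t_{e}\) with the 20.5 \(\upmu\)m range; the track etch rate used here is an assumed, order-of-magnitude value and is not a measured quantity of the present work.

```python
# Illustrative check of condition (1): the etched depth along the latent track
# should stay well below the 20.5 um range of 4 MeV alpha particles in PADC.
# V_T below is an assumed, order-of-magnitude value, NOT measured in this work.

RANGE_UM = 20.5          # range of 4 MeV alpha particles in PADC (um)
V_T = 3.0                # assumed track etch rate (um/h), illustrative only

for t_e in (2, 4, 6):    # etching times used in this work (h)
    depth = V_T * t_e    # etched depth along the track after t_e (um)
    print(f"t_e = {t_e} h -> depth ~ {depth:.1f} um "
          f"({depth / RANGE_UM:.0%} of the 20.5 um range)")
```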
Three sets of samples were exposed to alpha particles for durations between 30 s and 300 s; the first set was etched chemically in 6.25 N NaOH at 70 \(^\circ\)C for 2 h, and the second and third sets were etched under the same chemical conditions but for 4 h and 6 h, respectively (see Table 1). Images were recorded through the collimated area, up to 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\). Since the detectable track diameter needed to be optimized, the track size distribution was determined from two preliminary samples after they had been chemically etched in 6.25 N NaOH at 70 \(^\circ\)C for 4 h, see Fig. 2. Although both photomicrographs were produced under the same conditions, namely exposure time, etching time, alpha particle energy, and PADC detector, the randomness of the registered patterns is evident, and the registration area extends over the entire 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\) field. This confirms that the alpha particle source is isotropically distributed behind this area.

Comparison between the distributions of alpha-particle-track-free areas in two photomicrographs, (a) for sample b4/1 and (b) for sample b4/2, of the PADC detector irradiated with 4 MeV alpha particles for 1 min with a time interval of 10 min; the samples were chemically etched in 6.25 N NaOH at 70 \(^\circ\)C for 4 h. The area of each picture is 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\).
The track diameter distribution histogram in Fig. 3 shows that the alpha particle track diameters follow a Gaussian distribution centered at 8.77 ± 0.33 \(\upmu\)m. The small standard deviation indicates, on the one hand, that the track diameter is insensitive to the spread of alpha particle energies originating from the \(^{241}\)Am source and, on the other hand, that the etching process is efficient.
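For reference, such a Gaussian fit can be reproduced from a list of measured track diameters with a few lines of Python; the diameters array below is a hypothetical stand-in for the values extracted from the photomicrographs.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical stand-in for the track diameters (um) measured from the
# photomicrographs; replace with the values extracted by the image analysis.
rng = np.random.default_rng(0)
diameters = rng.normal(loc=8.77, scale=0.33, size=500)

mu, sigma = norm.fit(diameters)      # maximum-likelihood Gaussian parameters
print(f"track diameter = {mu:.2f} +/- {sigma:.2f} um")
```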

Histogram of the alpha track diameters in the PADC detector for samples d4/1 and d4/2.
The photomicrographs in Fig. 4 illustrate the variation of track density with exposure time; these photomicrographs are for the samples chemically etched for 4 h. The area of each picture is 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\). The observed circular track diameters ranged from 8.4 to 9.1 \(\upmu\)m. Several coalescent tracks are apparent in Fig. 4d–f, caused by tracks expanding into nearby ones. For low exposure times, the track densities are low and no significant alpha track coalescence was observed; for instance, for an irradiation time of 30 s, the alpha track density is (7.3 ± 0.7)\(\times 10^{4}\) tracks cm\({}^{-2}\). For an alpha particle irradiation time of 1 min, the alpha track density is (12.5 ± 0.8)\(\times 10^{4}\) tracks cm\({}^{-2}\); for the maximum exposure time of 5 min, the alpha track density amounts to (502 ± 1)\(\times 10^{4}\) tracks cm\({}^{-2}\).
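For reference, the conversion from a raw track count in one 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\) field of view to a density in tracks cm\({}^{-2}\) is a one-line calculation; the count used below is illustrative, not a measured value.

```python
# Convert a raw track count in one field of view into a density in tracks/cm^2.
FOV_UM = 1583.0                          # side of the imaged area (um)
area_cm2 = (FOV_UM * 1e-4) ** 2          # 1 um = 1e-4 cm, so area in cm^2

n_tracks = 1830                          # illustrative count for one image
density = n_tracks / area_cm2
print(f"{density:.2e} tracks/cm^2")      # about 7.3e4 tracks/cm^2 for this count
```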

Photomicrographs of alpha particle tracks in the PADC detector exposed to 4 MeV alpha particles for different durations: (a) 0.5 min, (b) 1 min, (c) 2 min, (d) 3 min, (e) 4 min, and (f) 5 min. The PADC detector was chemically etched in 6.25 N NaOH at 70 ± 1 \({}^{\circ }\)C for 4 h. The area of each picture is 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\).
The response of the PADC detectors to alpha particle exposure for different durations and etching times was compared by measuring the track density in each of the samples listed in Table 1. These PADC samples were chemically etched for 2 h, 4 h, and 6 h in 6.25 N NaOH at 70 ± 1 \({}^{\circ }\)C; the response of the detector depends on the tracks detectable after etching.
As shown in Fig. 5, for the 4 h etching time the plot of track density against exposure time exhibits a non-linearity in registration efficiency. As the exposure time increases, the alpha particle track diameters grow; overlapping tracks therefore coalesce and are registered as single tracks, reducing the track density, especially at longer exposure times. A similar non-linearity, with the track density nearly independent of the exposure time, was evident for the longer etching time of 6 h.

Alpha particle track density dependence on the exposure time at different etching times.
For an etching time of 2 h, the alpha particle track diameters were 4.9 ± 0.2 \(\upmu\)m, and the alpha particle track densities were linearly correlated with the alpha particle irradiation time up to 240 s of exposure, as depicted in Fig. 5. On the other hand, the maximum exposure time for which linearity holds in the samples etched for 4 h is 120 s. In such circumstances, the linear registration of all spatially incident alpha particles guarantees minimum loss of information. Hence, the patterns in samples f2, f4, and all samples etched for 6 h would give biased conclusions about the extracted information.
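The linear range can be checked by a straight-line fit of track density against exposure time and by inspecting the residuals. A minimal sketch is given below; the density values are placeholders, not the measured data of Table 1.

```python
import numpy as np

# Exposure times (s) and track densities (in units of 1e4 tracks/cm^2);
# the densities are illustrative placeholders, not the measured values.
t = np.array([30, 60, 120, 180, 240, 300], dtype=float)
rho = np.array([7.3, 12.5, 25.0, 37.0, 49.0, 54.0])

mask = t <= 240                                    # points assumed to lie in the linear range
slope, intercept = np.polyfit(t[mask], rho[mask], 1)
pred = slope * t + intercept
print("slope =", slope, "(1e4 tracks/cm^2 per s)")
print("relative residuals:", (rho - pred) / pred)  # a large residual flags non-linearity
```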
Randomness analysis
Analyzing patterns is a well-established branch of computational science and information technology23. The most relevant abstraction is point pattern analysis (PPA)24, which involves the analysis of the spatial locations of points in a multi-dimensional (mostly two-dimensional) array. These analyses reveal deep-lying information in such patterns. The divergence analysis is achieved by adopting quadrat sampling to obtain the observed frequency distribution and a statistical model that gives predicted probabilities, which may be compared with the individual probabilities in the observed frequency distribution. The theoretical probability distribution is obtained by making the sensible assumption that randomness of registration governs the evolution of the features in the pattern. From this assumption, we deduce the probability distribution that predicts the frequency distribution of the quadrats. Finally, the predicted probability distribution is compared with the observed probability distribution obtained by sampling the pattern, using the Kullback–Leibler divergence based on the Shannon entropy concept. There are no particular restrictions on the shape and size of a quadrat, provided the size is reasonable compared with the area under investigation. The selection of the quadrat size is always an arbitrary procedure but may influence the subsequent interpretation of the results. One of the most widely used treatments of quadrat size is the approach taken by Greig-Smith25, which tests randomness at a variety of scales within a square quadrat census in which the number of cells on each axis is some power of 2, exploiting the property of the Poisson distribution that its mean \(\lambda\) equals its variance. However, in searching for evidence of clustering at a given scale, Greig-Smith suggested that the quadrat size at that scale should be related to the mean area per feature of the pattern; note that the test described here does not measure tendencies towards uniformity in the pattern. In the present work, we forced the quadrat area to follow the relation
$$\begin{aligned} A_{QS} =\sqrt{2} \frac{A}{N_{t} } \end{aligned}$$
(1)
where A is the studied area and \(N_{t}\) is the total number of features in the whole pattern.
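As a short sketch of how Eq. (1) translates into a grid over the 1583 \(\times\) 1583 \(\upmu\)m\({}^{2}\) field of view (the total feature count \(N_{t}\) below is illustrative):

```python
import math

A = 1583.0 * 1583.0          # studied area (um^2)
N_t = 1830                   # illustrative total number of features in the pattern

A_QS = math.sqrt(2) * A / N_t            # quadrat area from Eq. (1) (um^2)
side = math.sqrt(A_QS)                   # side of a square quadrat (um)
n_per_axis = int(round(1583.0 / side))   # number of quadrats along each axis
print(f"A_QS = {A_QS:.0f} um^2, side = {side:.1f} um, grid = {n_per_axis} x {n_per_axis}")
```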
The null hypothesis for alpha tracks in an SSNTD is that every location in the exposed area has an equal probability of being hit, which implies that the number of hits in a region is proportional to its area A and follows the Poisson probability distribution. However, if there were clustering or dispersion in the registered pattern, the distribution would be different.
The Poisson probability distribution (\(q=q_{i} =q(x_{i} )\)) of the number of features occurring in a quadrat is
$$\begin{aligned} q(x_{i} )=\frac{\lambda ^{x_{i} } }{\Gamma (x_{i} +1)} e^{-\lambda } , \end{aligned}$$
(2)
which gives the probability that \(x_{i}\) events occur in a quadrat under random registration; \(\lambda\) is the intensity function, which describes both the expected mean value and the variance of the distribution and is given by the relation
$$\begin{aligned} \lambda =\frac{N_{P} }{N_{PQ} } \end{aligned}$$
(3)
where \(N_{P}\) is the total number of features in the registered pattern within the investigated clip and \(N_{PQ}\) is the number of quadrats into which the study area is divided. This analysis was undertaken for the samples a2, b2, c3, d2, e2, f2, a4, b4/1, b4/2, b4/3, c4, d4/1, e4, and f4, as labeled in Table 1.
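A minimal sketch of the quadrat counting and of the Poisson null model of Eqs. (2) and (3) is given below; the coordinates are hypothetical, and in practice the grid size would follow Eq. (1).

```python
import numpy as np
from scipy.stats import poisson

FOV = 1583.0                                   # side of the field of view (um)
rng = np.random.default_rng(1)
xy = rng.uniform(0, FOV, size=(1800, 2))       # hypothetical track coordinates (um)

n_axis = 36                                    # quadrats per axis (chosen to satisfy Eq. (1))
counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_axis,
                              range=[[0, FOV], [0, FOV]])
counts = counts.ravel().astype(int)            # x_i, number of features in each quadrat

lam = counts.sum() / counts.size               # Eq. (3): lambda = N_P / N_PQ
x = np.arange(counts.max() + 1)
q = poisson.pmf(x, lam)                        # Eq. (2): null-hypothesis probabilities
p = np.bincount(counts, minlength=x.size) / counts.size   # observed quadrat-count probabilities
print("lambda =", lam)
```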
The comparison in Fig. 6 shows a heatmap of the number of features in each quadrat, \(x_{i} =N\), counted over all investigated areas; the total number of quadrats depends on the condition in Eq. (1). Detailed information is given in Table 2. Generally, the statistics rely on the value of \(\lambda\), so Eq. (1) guarantees the comparability of the results.

Heatmap of the investigated local track densities of samples a2, b2, c3, d2, e2, and f2 after being etched for 2 h. The grayscale represents the number of features in each quadrat, \(x_{i}\), while the number of quadrats increases progressively to fulfill Eq. (1). The dashed squares (red color online) show clipped areas containing quadrats used for the Poisson distribution analysis; see text.
The histograms of the probability of a number of features \(x_{i}\) within each quadrat, deduced from the frequency statistics of the number of quadrats having a count \(x_{i}\) divided by the total number of counts \(N_{P}\), are shown in Fig. 7. For comparison, the Poisson distribution given in Eq. (2) was calculated assuming the exact value of the mean \(\lambda\).

Probability histograms of the test hypothesis p for samples a2, b2, c3, d2, e2, and f2 and of the null hypothesis q versus the number of features \(x_{i}\) within each quadrat. The null hypothesis is randomness based on the Poisson distribution in Eq. (2), assuming the same value of \(\lambda\) as given in Table 2.
The photomicrographs of the samples a4, b4/1, b4/2, b4/3, c4, d4/1, e4, and f4 were analyzed using the same method. The results are illustrated in Figs. 8, 9, and 10.

Constructed as in Fig. 6 for the investigated local track densities of samples a4, b4/1, c4, d4/1, e4, and f4 after being etched for 4 h.

Probability histograms of the test hypothesis p for samples a4, b4/1, c4, d4/1, e4, and f4 and of the null hypothesis q versus the number of features \(x_{i}\) within each quadrat. They were constructed as in Fig. 7.
Entropy and divergence
The amount of information concerning the variability of a random variable (the uncertainty of its randomness) in a statistical system of events is directly given by the system's Shannon entropy26.
$$\begin{aligned} H(p)=-\frac{1}{\log N_{PQ} } \sum _{i}p_{i} \log p_{i} \end{aligned}$$
(4)
where \(p=\{ p_{i} \}\) is the probability of the event in quadrat \(i\in \chi\), belonging to the same probability space \(\chi\) as the observables. The most crucial property of Shannon's entropy is its ability to measure the extent to which the data are spread out over the possible values; lower entropy values correspond to high information content and are most likely to reveal a strong rule or correlation. For random data, the normalized Shannon entropy is equal to 1. In other terms, increased observability must lead to decreased uncertainty and entropy.
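A minimal sketch of the normalized entropy of Eq. (4), applied to an observed probability vector over the \(N_{PQ}\) quadrats (placeholder values):

```python
import numpy as np

def normalized_entropy(p):
    """Shannon entropy of Eq. (4), normalized by log(N_PQ) so that a uniform
    (fully random) distribution gives H = 1."""
    p = np.asarray(p, dtype=float)
    n = p.size                      # N_PQ, number of quadrats
    nz = p[p > 0]                   # the 0*log(0) terms are taken as zero
    return -np.sum(nz * np.log(nz)) / np.log(n)

# placeholder probabilities over N_PQ = 4 quadrats
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))   # -> 1.0 (random)
print(normalized_entropy([0.70, 0.10, 0.10, 0.10]))   # -> about 0.68 (structured)
```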
The difference between a truly random process or signal and a more deterministic process can be quantified using the Kullback–Leibler divergence (KLD)27 against divergence theoretical models28. The KLD is a measure of the dissimilarity between two probability distributions p and \(q=\{ q_{i} \}\); p usually represents the probability distribution of the data (the observations), and q the probability distribution of the random model representing it, optimized for p. For the discrete case of data29, the KLD reads
$$\begin{aligned} D_{\mathrm{KLD}} (p\parallel q)=\sum _{i}p_{i} \log \left( \frac{p_{i} }{q_{i} } \right) \end{aligned}$$
(5)
A positive value of \(D_{\mathrm{KLD}} (p{||}q)\) represents the information gained from p instead of the random model q. In terms of Bayesian inference, \(D_{\mathrm{KLD}} (p\parallel q)\) is the information gained when a measurement yields the posterior probability distribution p compared with the a priori known probability distribution q or, conversely, the inference lost when the assumed random distribution q is used instead of the measured p30. The value of \(D_{\mathrm{KLD}} (p\parallel q)\) goes to zero as the two probability distributions become identical.
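The divergence of Eq. (5) between the observed distribution p and the Poisson null model q can be computed directly, as in the sketch below (the same quantity, in nats, is returned by scipy.stats.entropy(p, q)).

```python
import numpy as np

def kld(p, q):
    """Kullback-Leibler divergence of Eq. (5), D_KLD(p || q), in nats.
    Terms with p_i = 0 contribute nothing; q must be nonzero wherever p is."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# placeholder distributions defined on the same support
p_obs = [0.05, 0.25, 0.40, 0.20, 0.10]
q_poisson = [0.10, 0.25, 0.30, 0.22, 0.13]
print(kld(p_obs, q_poisson))      # 0 only when p and q coincide
```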
The \(D_{\mathrm{KLD}} (p\parallel q)\) results are given in Table 2 and embedded within Figs. 7, 9, and 10. The values in these figures are calculated based on the prior probabilities of random events given by the Poisson distribution. The greater the prior uncertainty of an occurrence, the greater the information gained if such a non-random event occurs. The criteria for defining an information statistic require that the measure varies from zero to infinity and that it is additive for independent events. The results showed information embedded within the track pattern. Information could be extracted from the patterns in samples a2, b2, c2, d2, e2, a4, b4/1, c4, d4/1, and e4. However, this information could be misleading due to other effects, as discussed below.

Comparison between the heatmaps of the investigated local track densities of samples (a) b4/1, (c) b4/2, and (e) b4/3 and their associated probability histograms (b), (d), and (f), respectively. Constructed as in Figs. 7 and 8.
Clustering and dispersion analysis
Dispersion, skewness, and other major parameters can be inferred from the central tendency analysis, while clustering requires density analysis (including entropy and divergence) and distance analysis using the pair correlation function (radial distribution function) and Ripley's K function31, as used in spatial analysis methods. The basic descriptive centrographic technique for real data analysis is the feature center (\(X_{c}\), \(Y_{c}\)), in which
$$\begin{aligned}X_{c} =\frac{1}{N\sum _{i=1}^{N}w_{i} } \sum _{i=1}^{N}w_{i} X_{i} , \end{aligned}$$
(6)
$$\begin{aligned}Y_{c} =\frac{1}{N\sum _{i=1}^{N}w_{i} } \sum _{i=1}^{N}w_{i} Y_{i} \end{aligned}$$
(7)
where \(w_{i}\) is the weighting factor of the feature, which may be considered as the reciprocal of the uncertainty that the point (\(X_{i}\), \(Y_{i}\)) lies within the central area of the feature. For well-defined points, \(w_{i}=1\). The variance of the distribution of the data may differ between the X and Y directions,
$$\begin{aligned}\sigma _{Y}^{2} =\frac{1}{(N-1)\sum _{i=1}^{N}w_{i} } \sum _{i=1}^{N}w_{i} \left( Y_{i} -Y_{c} \right) ^{2} \end{aligned}$$
(8)
$$\begin{aligned}\sigma _{X}^{2} =\frac{1}{(N-1)\sum _{i=1}^{N}w_{i} } \sum _{i=1}^{N}w_{i} \left( X_{i} -X_{c} \right) ^{2} . \end{aligned}$$
(9)
The distribution deviation is determined by the relation
$$\begin{aligned} \sigma _{D}^{2} =\frac{\sigma _{X}^{2} +\sigma _{Y}^{2} }{2} , \end{aligned}$$
(10)
while the quality of the distribution is determined by the relation
$$\begin{aligned} Q_{D} =\left| \frac{\sigma _{X}^{2} -\sigma _{Y}^{2} }{2} \right| . \end{aligned}$$
(11)
The standard deviation in two dimensions is defined by
$$\begin{aligned} \sigma ^{2} =\frac{\sum _{i=1}^{N}w_{i} \left( \left( X_{i} -X_{c} \right) ^{2} +\left( Y_{i} -Y_{c} \right) ^{2} \right) }{(N-2)\sum _{i=1}^{N}w_{i} } . \end{aligned}$$
(12)
The factor (N-2) provides an unbiased estimate of the standard distance, since two constants (the coordinates of the center) enter the deviation. Note that \(\sigma ^{2} \ne \sigma _{D}^{2}\) if a circular clip of the pattern is taken, whether or not \(Q_{D} \ne 0\).
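A minimal sketch of these centrographic quantities for hypothetical coordinates with unit weights is given below. Note that the sketch uses the standard weighted estimators (center \(\sum_i w_i X_i/\sum_i w_i\), variances with N-1 and N-2 denominators), i.e., a simplified normalization rather than the exact prefactors written in Eqs. (6)-(12).

```python
import numpy as np

def centrographic_stats(x, y, w=None):
    """Weighted feature center, directional variances, distribution deviation
    sigma_D^2, quality Q_D, and two-dimensional variance. Standard weighted
    estimators are used here, a simplification of Eqs. (6)-(12)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(x) if w is None else np.asarray(w, float)
    sw = w.sum()

    xc = np.sum(w * x) / sw                       # feature center, x (cf. Eq. (6))
    yc = np.sum(w * y) / sw                       # feature center, y (cf. Eq. (7))
    var_x = np.sum(w * (x - xc) ** 2) / (sw - 1)  # cf. Eq. (9)
    var_y = np.sum(w * (y - yc) ** 2) / (sw - 1)  # cf. Eq. (8)
    var_d = 0.5 * (var_x + var_y)                 # sigma_D^2, Eq. (10)
    q_d = abs(var_x - var_y) / 2.0                # Q_D, Eq. (11)
    var_2d = np.sum(w * ((x - xc) ** 2 + (y - yc) ** 2)) / (sw - 2)  # cf. Eq. (12)
    return xc, yc, var_d, q_d, var_2d

# hypothetical coordinates (um) within the 1583 x 1583 um^2 field of view
rng = np.random.default_rng(2)
pts = rng.uniform(0, 1583, size=(500, 2))
print(centrographic_stats(pts[:, 0], pts[:, 1]))
```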
As shown in Table 2, the photomicrograph patterns of the alpha particle tracks do not exhibit a uniform spatial distribution around the center of the data. The \(Q_{D}\) values span a range from 0.005 to 0.247. A large value of \(Q_{D}\) at low exposure times is attributed to the limited number of registered tracks in the detectors, which is insufficient to attain random data. Hence, the patterns a2, b2, a4, and b4/1 contain remnant information of randomness, even though a2 and a4 have large values of entropy divergence. Similarly, at the large exposure time of 300 s, track coalescence may disturb the gained information. At intermediate track densities, the value of \(Q_{D}\) approaches 0, the nominal value for random track registration. Conversely, its value may increase due to the accumulation of clustering information within the alpha particle tracks.
The empirical K-function
The empirical distribution function of the pairwise distances is used to search for anomalies in the feature patterns. The second moment of this distribution function is the differential radial distribution function (RDF) as a function of distance r. Our focus is on the distance, or spacing, between features in the registered pattern. Each ordered pair of points has a measured distance \(d_{i,j} =||r_{i} -r_{j} ||\), which may contain information about the alpha particles' spatial pattern.
Two different definitions of the RDF as a function of distance r are used in the present work: first,
$$\begin{aligned} H_{1} (r_{i} )=\frac{1}{{\hat{\lambda }}} \sum _{i=1} \vec {1}(r_{i-1}<d_{i,c} <r_{i} ) \end{aligned}$$
(13)
where \(d_{i,c}\) is the distance between the feature labeled i and the center of the data, \(\vec {1}\)(condition) is the indicator function for the satisfaction of the condition, and \({\hat{\lambda }}\) is the average number of features per unit area. In this case, the cutoff radial distance is the radius of the clipped pattern (denoted \(d_{c}\)). Second, as a function of the pair distances,
$$\begin{aligned} H_{2} (r_{i} )= N\frac{\sum _{i}\sum _{j\ne i} \vec {1}(d_{i,j} <r_{i}) }{{\hat{\lambda }}({\text {No. of interdistances}})} \end{aligned}$$
(14)
The condition \(d_{i,j} <d_{c} -r_{i}\) was introduced to restrict the calculation to the pattern within the clipped circle and to reduce the edge effect in the data counting. The value is normalized to the newly counted features. The functions \(H_{1} (r_{i} )\) and \(H_{2} (r_{i} )\) are plotted in Figs. 11 and 12.
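A simplified sketch of the two radial distribution functions is given below (hypothetical coordinates): \(H_{1}\) bins the feature-to-center distances, while \(H_{2}\) counts pair distances below each radius; the edge-restriction condition \(d_{i,j}<d_{c}-r_{i}\) is omitted here for brevity.

```python
import numpy as np
from scipy.spatial.distance import pdist

FOV = 1583.0
rng = np.random.default_rng(3)
pts = rng.uniform(0, FOV, size=(800, 2))          # hypothetical track coordinates (um)

center = pts.mean(axis=0)                         # center of the data (cf. Eqs. (6)-(7))
d_center = np.linalg.norm(pts - center, axis=1)   # d_{i,c}
d_pairs = pdist(pts)                              # unordered pair distances d_{i,j}
lam_hat = len(pts) / FOV ** 2                     # average number of features per unit area

r = np.linspace(0, FOV / 2, 60)                   # radial sampling points (um)
h1 = np.histogram(d_center, bins=r)[0] / lam_hat              # shells about the center, cf. Eq. (13)
h2 = np.array([np.sum(d_pairs < ri) for ri in r]) \
     * len(pts) / (lam_hat * d_pairs.size)                    # pair counts, cf. Eq. (14)
```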

Plot of the radial distribution functions \(H_{1} (r_{i} )\) and \(H_{2} (r_{i} )\) for samples a2, b2, c2, d2, e2, and f2 after being etched for 2 h.

Plot of the radial distribution functions \(H_{1} (r_{i} )\) and \(H_{2} (r_{i} )\) for samples a4, b4/1, c4, d4/1, e4, and f4 after being etched for 4 h.
The variation of the central distribution function reveals a kind of correlation in the pattern, which may be symmetric around the center of the registered pattern. Such a central tendency is a consequence of the collimator and the inverse-square law, whereby the alpha particles emitted from the source may travel along the diagonal of the collimator rather than parallel to its axis. The pair RDF cannot detect such a pattern because of its moving-average nature.
Here, the radial distribution functions \(H_{1} (r_{i} )\) and \(H_{2} (r_{i} )\) give two crucial pieces of information. \(H_{1} (r_{i} )\) gives the distribution around the center of the data obtained from Eqs. (6) and (7), in which the central symmetry of the function compensates for the effect of the non-randomness of the data; the data are truncated at the end of the field of view (FoV). \(H_{2} (r_{i} )\), on the other hand, is a pair distribution function that is sensitive to clustering and dispersion in the pattern as well as to the edge effect. Collimating the alpha particles onto a defined region of the detector causes two effects: (1) pairs near the edge of the area have fewer neighbors on one side, which lowers the value of \(H_{2} (r_{i} )\) near the end of the FoV, and (2) remote neighbors in the other directions yield values of \(H_{2} (r_{i} )\) at distances greater than the end of the FoV. The difference between these effects, within a radius of about one third of the diameter of the data, is another clue to the existence of clustering or dispersion in the pattern, as shown in Fig. 11. Similar behavior was observed for the samples etched for 4 h (see Fig. 12).
The K-function, on the other hand, is a cumulative radial distribution function (CRDF), obtained from \(H_{1} (r_{i} )\) and \(H_{2} (r_{i} )\) as
$$\begin{aligned}K_{1} (r)=\sum _{\begin{array}{c} i=1 \\ r_{i} \le r \end{array}} w_{i,c} H_{1} (r_{i} ) \end{aligned}$$
(15)
$$\begin{aligned}K_{2} (r)=\sum _{\begin{array}{c} i=1 \\ r_{i} \le r \end{array}} w_{i,c} H_{2} (r_{i} ) \end{aligned}$$
(16)
where \(w_{i,c}\) is a commonly used edge-correction estimator. The weight function takes values between 0 and 1 and gives higher weight to points near the center of the investigated area than to points at the edge. In the present work, we use \(w_{i,c} =1\), i.e.
$$\begin{aligned}K_{1} (r)=\frac{1}{{\hat{\lambda }}} \sum _{i} 1(d_{i,c} <r) \end{aligned}$$
(17)
$$\begin{aligned}K_{2} (r)=N\frac{ \sum _{i}\sum _{j\ne i} 1(d_{i,j}<r_{i} ;d_{i,j} <d_{c} -r_{i} )}{{\hat{\lambda }}({\text {No. of points}})} \end{aligned}$$
(18)
The functions \(K_{1} (r)\) and \(K_{2} (r)\) are plotted in Figs. 13 and 14. It is obvious that the \(K_{1\;\mathrm{or }\; 2}\)-functions and the \(H_{2}\)-function do not uniquely define the pattern. Still, they can be used to detect whether there is a direct interaction between the processes causing the pattern; i.e., two different patterns may have the same K-function, see Refs.32,33.
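A sketch of the two cumulative functions with \(w_{i,c}=1\), following Eqs. (17) and (18), is given below; the coordinates are hypothetical, and the pair counts are normalized to the number of pair distances as in Eq. (14).

```python
import numpy as np
from scipy.spatial.distance import pdist

FOV = 1583.0
rng = np.random.default_rng(4)
pts = rng.uniform(0, FOV, size=(800, 2))          # hypothetical track coordinates (um)

center = pts.mean(axis=0)
d_center = np.linalg.norm(pts - center, axis=1)   # d_{i,c}
d_pairs = pdist(pts)                              # unordered pair distances d_{i,j}
d_c = FOV / 2                                     # radius of the clipped pattern
lam_hat = len(pts) / FOV ** 2

r = np.linspace(1, d_c, 60)
k1 = np.array([np.sum(d_center < ri) for ri in r]) / lam_hat               # cf. Eq. (17)
k2 = np.array([np.sum((d_pairs < ri) & (d_pairs < d_c - ri)) for ri in r]) \
     * len(pts) / (lam_hat * d_pairs.size)                                 # cf. Eq. (18)
```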

Plot of the cumulative radial distribution functions \(K_{1} (r)\) and \(K_{2} (r)\) for samples a2, b2, c2, d2, e2, and f2. The values of \(\pi r^{2}\) were added for comparison.

Plot of the cumulative radial distribution functions \(K_{1} (r)\) and \(K_{2} (r)\) for samples a4, b4/1, c4, d4/1, e4, and f4. The values of \(\pi r^{2}\) were added for comparison.
In this respect, the null hypothesis of the K-function is that the number of features lying closer than a distance r has the expected value \(K_{1\;\mathrm{or }\; 2} (r)\), i.e., it varies as \(\pi r^{2}\); deviations from this expectation indicate scales of clustering and/or dispersion34,35. An inhibited process suppresses the formation of features and will usually have \(K_{1\;\mathrm{or }\; 2} (r)<\pi r^{2}\), while an enhanced process causes clustered features and will have \(K_{2} (r)>\pi r^{2}\), for appropriate values of r. While \(K_{1}\) is related to the nearest-center distribution and mainly to the anisotropy of the radial signature of the features, \(K_{2}\) is associated with the nearest-neighbor distribution and is related to the non-stationary processes causing the features; it is also known as Ripley's K36,37.
Consequently, the trend of \({{\varvec{K}}}_{1} (r)\) follows the \(\pi r^{2}\) trend up to the end of the FoV, while the trend of \({{\varvec{K}}}_{2} (r)\) follows the \(\pi r^{2}\) trend up to 1/3 of the diameter of the collimator (2/3 of the radius to the end of the FoV).
Because of the difficulty of such a comparison, we consider the difference L-functions
$$\begin{aligned}L_{1} (r)=\sqrt{\frac{K_{1} (r)}{\pi } } -r \end{aligned}$$
(19)
$$\begin{aligned}L_{2} (r)=\sqrt{\frac{K_{2} (r)}{\pi } } -r \end{aligned}$$
(20)
These functions are plotted in Figs. 15 and 16. Since the L-function has the dimension of distance, the confidence interval is simply the confidence interval of each feature, i.e., the 0.3 \(\upmu\)m deduced from the analysis of the track diameters. Values of the L-function greater than this represent correlations among clusters, which occur at around 200–500 \(\upmu\)m.
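Given \(K_{1}(r)\) or \(K_{2}(r)\) computed as in the previous sketch, the L-functions of Eqs. (19) and (20) follow in one line; for complete spatial randomness \(K(r)=\pi r^{2}\), so L vanishes, and deviations beyond the ~0.3 \(\upmu\)m track-diameter uncertainty indicate clustering.

```python
import numpy as np

def l_function(k, r):
    """Difference L-function of Eqs. (19)-(20): sqrt(K(r)/pi) - r."""
    return np.sqrt(np.asarray(k, dtype=float) / np.pi) - np.asarray(r, dtype=float)

# illustrative check: for complete spatial randomness K(r) = pi*r^2, so L(r) = 0
r = np.linspace(1.0, 500.0, 5)
print(l_function(np.pi * r ** 2, r))     # -> zeros
# values of L larger than ~0.3 um (the track-diameter uncertainty) indicate clustering
```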

Plot of the L-functions \(L_{1} (r)\) and \(L_{2} (r)\) for samples a2, b2, c2, d2, e2, and f2.

Plot of the L-functions \(L_{1} (r)\) and \(L_{2} (r)\) for samples a4, b4/1, c4, d4/1, e4, and f4.
From these results, the most reasonable information could be extracted from samples c2 and c4. However, c4 has more features, as depicted in Table 1. Hence, sample c4 was the best candidate pattern for extracting information.
Proximity analysis
To find out what lies near, or within a certain distance of, one or more features, we use a common geographic information system process that employs a buffer: a tool that creates a new feature class of buffer polygons around a specified input feature based on some factor. Our factor is the reciprocal of the nearest-neighbor distance (NND). The value of the NND is inversely proportional to the density of the features and indicates whether the data points are clustered or dispersed. In this respect, Fig. 17 shows the proximity analysis based on the buffer obtained from the analysis of the radial distribution.
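A sketch of the nearest-neighbor distances, and of the reciprocal-NND factor used for the buffer, based on a k-d tree is given below; the coordinates are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

FOV = 1583.0
rng = np.random.default_rng(5)
pts = rng.uniform(0, FOV, size=(800, 2))     # hypothetical track coordinates (um)

tree = cKDTree(pts)
dist, _ = tree.query(pts, k=2)               # k=2: nearest neighbor other than the point itself
nnd = dist[:, 1]                             # nearest-neighbor distance of every feature (um)

buffer_factor = 1.0 / nnd                    # reciprocal NND used as the buffer factor
print(f"mean NND = {nnd.mean():.1f} um")
```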

Proximity analysis diagram based on a buffer of reciprocal NND. Semitransparent circles show some correlated features.
The proximity analysis data in Fig. 17 show a long-range correlational symmetry between regions of clusters. The semitransparent circles mark some correlated features with diameters of about 100 \(\upmu\)m. Similar clustering in the registered tracks has been observed in all samples and is evident in the heatmaps in Figs. 6, 8, and 10. The origin of such a correlation is unknown. The present results cast doubt on the validity of the random model for estimating the trajectories of alpha particles, especially when an isotopic source is used as an initiator for neutron-emitting reactions (e.g., the \(^{241}\)AmBe neutron source38). In radiation detection, the phenomenology of assessing radioactivity may be influenced by the configuration mixing between particle decay and its interaction39 and may interfere with possible time-varying decay rates40,41,42,43,44,45,46,47,48,49,50.