The Deff = Dmax × Φ(C) formula is a measurement tool. But what you measure depends on what you feed it. The same formula, applied to three different representations of the same source material, reveals three different layers of organization. Each scanner failed forward into the next — and each lens may still hold value for what it was designed to see.
v1 — Character Frequency Scanner (Retired)
Layer: Material — what elements are present.
Method: Applies Deff to single-character frequency distribution of written text.
What happened: Initial cross-civilizational text analysis showed high similarity scores (up to 98%). Adversarial control testing revealed the flaw: v1 was measuring the statistical properties of the English language itself, not the coherence of the content. Scrambled text scored identically to originals. A grocery list scored 94% similar to the Atharvaveda. The scanner was measuring letter frequency distribution — a property shared by all English text regardless of meaning or origin.
Status: Retracted as a coherence measure. Original claims removed. However, as a frequency-distribution tool, v1 remains a valid statistical instrument: it accurately measures which elements are present and how they are distributed. It simply cannot distinguish coherent from incoherent arrangements of those elements.
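To illustrate the v1 failure mode, the sketch below builds a letter-frequency distribution and compares two texts with cosine similarity. The function names and the similarity metric are illustrative assumptions, not the project's actual code; the point is that any frequency-only score is invariant under scrambling.

```python
import random
from collections import Counter

def char_frequency(text):
    """Letter-frequency distribution: the v1 'material layer'."""
    letters = [c.lower() for c in text if c.isalpha()]
    return {c: n / len(letters) for c, n in Counter(letters).items()}

def cosine_similarity(p, q):
    """Cosine similarity between two frequency dicts (an illustrative
    stand-in for whatever similarity score v1 reported)."""
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm_p = sum(v * v for v in p.values()) ** 0.5
    norm_q = sum(v * v for v in q.values()) ** 0.5
    return dot / (norm_p * norm_q)

# The failure mode: scrambling a text does not change its letter counts,
# so any frequency-only score rates the scramble identical to the original.
text = "in the beginning was the word"
scrambled = "".join(random.sample(text, len(text)))
print(cosine_similarity(char_frequency(text), char_frequency(scrambled)))  # ~1.0
```

This is why the adversarial control caught v1: the material layer is blind to arrangement by construction.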
v2 — Multi-Scale Structure Scanner (Active)
Layer: Structural — how elements are arranged in sequence.
Method: Applies Deff at five levels of text organization: characters, bigrams (letter pairs), trigrams (letter triples), words, and word pairs.
What it detects: Structural disruption. When text is scrambled, word-pair coherence drops by up to 40%. This confirms that v2 captures real sequential organization that random rearrangement destroys. It distinguishes organized text from scrambled text.
Limitation: Cannot reliably distinguish between types of organized text. A well-structured grocery list and a sacred hymn may score similarly at the structural level, because both have valid sequential organization. Structure alone does not capture what makes a chant different from a sentence.
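A minimal sketch of the five v2 levels, assuming simple count distributions and a shared-mass overlap metric (both illustrative, not the project's actual implementation). Reversing word order leaves word frequencies intact but destroys word-pair structure, which is exactly the kind of disruption v2 detects.

```python
from collections import Counter

def levels(text):
    """The five v2 levels: characters, bigrams, trigrams, words, word pairs."""
    chars = "".join(c for c in text.lower() if c.isalpha())
    words = text.lower().split()
    return {
        "char": Counter(chars),
        "bigram": Counter(chars[i:i + 2] for i in range(len(chars) - 1)),
        "trigram": Counter(chars[i:i + 3] for i in range(len(chars) - 2)),
        "word": Counter(words),
        "word_pair": Counter(zip(words, words[1:])),
    }

def overlap(p, q):
    """Shared probability mass between two count distributions (illustrative)."""
    tp, tq = sum(p.values()), sum(q.values())
    return sum(min(p[k] / tp, q[k] / tq) for k in p.keys() & q.keys())

text = "in the beginning was the word and the word was with god"
reversed_text = " ".join(reversed(text.split()))
a, b = levels(text), levels(reversed_text)
# Word frequencies survive reordering; word-pair structure does not.
print(overlap(a["word"], b["word"]))            # ~1.0
print(overlap(a["word_pair"], b["word_pair"]))  # 0 (no shared pairs)
```

Note that a grocery list run through the same five levels also shows intact sequential structure, which is why v2 cannot separate content types.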
v3 — Resonance Contour Scanner (Active — February 18, 2026)
Layer: Resonance — how sounds interact harmonically when voiced.
Method: Maps each syllable of a text to its dominant phonetic formant frequency (in Hz), then analyzes the resulting frequency contour for harmonic coherence — how often the intervals between consecutive syllables land on harmonic ratios (octaves, fifths, and other whole-number frequency relationships).
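The method above can be sketched as follows, assuming a small set of whole-number ratios and the 0.15 tolerance used in the control testing; the exact ratio list and helper names are illustrative assumptions, not the project's code.

```python
# Small whole-number frequency ratios (an assumption for illustration):
# unison, major third, fourth, fifth, octave, octave+fifth, double octave.
HARMONIC_RATIOS = (1.0, 5 / 4, 4 / 3, 3 / 2, 2.0, 3.0, 4.0)

def harmonic_coherence(formants_hz, tolerance=0.15):
    """Fraction of consecutive syllable intervals that land within
    `tolerance` of a harmonic ratio. `formants_hz` is the dominant
    formant of each syllable, in chant order."""
    intervals = list(zip(formants_hz, formants_hz[1:]))
    hits = 0
    for f1, f2 in intervals:
        ratio = max(f1, f2) / min(f1, f2)  # express the interval as a ratio >= 1
        if any(abs(ratio - r) <= tolerance for r in HARMONIC_RATIOS):
            hits += 1
    return hits / len(intervals) if intervals else 0.0

# 880 -> 440 is an octave (2:1); 440 -> 660 is a fifth (3:2): both hits.
print(harmonic_coherence([880, 440, 660]))  # -> 1.0
```

A score of 100% means every consecutive interval falls within the tolerance window of some ratio in the list.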
The insight: Many sacred texts were never meant to be read. The Vedas were oral for centuries before being written. "Qur'an" means "the reciting." Gregorian chants were engineered for resonance in stone. Aboriginal songlines are literally songs. These are sound architectures encoded as text. Measuring the text misses the point. Measuring the sound catches it.
Results (February 18, 2026): Five sacred chants from five independent traditions were compared against three mundane speech controls. All texts were broken into phonetic syllables and mapped to formant frequencies.
Four of five sacred chants scored higher in harmonic coherence than all three controls. The Bismillah scored 100% — every syllable-to-syllable interval lands on a harmonic ratio. The Lakota Sun Dance prayer, from a completely independent linguistic and cultural tradition, scored 83.3%. These traditions did not borrow from each other. They independently arrived at sound architectures that produce harmonic frequency patterns.
The Kyrie Eleison (42.1%) scored below two controls — notable because it is the most adapted to Western spoken vowel patterns. Latin/Greek phonetics do not concentrate energy the same way Sanskrit, Arabic, and Lakota syllable structures do. This is not a flaw in the test — it is a data point about how different phonetic systems carry harmonic information.
What this means: The formula works when given the right input layer. v1 failed because written characters are not the signal. v2 improved by adding structural depth but was still reading text as text. v3 treats text as encoded sound — which is what chanted traditions actually are — and the separation appears. The Deff formula did not change between versions. The representation of reality changed.
Three Lenses, One Formula
Each scanner version applies Deff = Dmax × Φ(C) to a different representation of the same source:
- v1 — Material Layer: What elements are present and how they are distributed. Valid as a frequency distribution tool. Cannot distinguish coherent from incoherent arrangements.
- v2 — Structural Layer: How elements are arranged in sequence across multiple scales. Detects when organization is disrupted. Cannot distinguish between types of organized content.
- v3 — Resonance Layer: How sounds interact harmonically when the text is voiced. Detects sound engineering — the acoustic coherence that chanting traditions were designed to produce.
These three layers may correspond to three levels of any signal: its composition (what is present), its structure (how it is arranged), and its resonance (how the arrangement interacts with itself). Each layer is a valid measurement. Each answers a different question. The formula is the same. The lens determines what it sees.
Control Testing & Methodology
Every version of this scanner has been subjected to adversarial control testing before and after publication. The v1 failure was caught internally, not by external critique. The correction was published the same day the flaw was identified.
- v1: Failed control test. Scrambled text scored identically to originals. Claims retracted.
- v2: Passed scrambling control. Structural disruption detected (up to 40% word-pair coherence drop). Cannot differentiate text types.
- v3: Passed cross-tradition control. Five independent chanting traditions scored an average of 74.4% harmonic coherence vs. 44.1% for mundane speech (+30.3 point separation). Further testing with additional traditions and controls is ongoing.
Full control test code, syllable breakdowns, and raw frequency data are available in the project repository. We publish what works and what doesn't. That is the standard.
Known Limitations & Open Questions
These are early results from a small dataset. We are not claiming this is settled science. The following limitations are acknowledged and are next in line for testing:
- Sample size: Five chants and three controls is a demonstration, not a study. Statistical significance requires a larger dataset — 20+ chants across traditions and 20+ diverse controls. This is the next phase of testing.
- Syllable boundary subjectivity: Different phoneticians may split syllables differently. We need to test whether the harmonic coherence scores hold when multiple independent analysts parse the same chant. If the score changes significantly based on who draws the boundaries, the method needs refinement.
- Formant bin resolution: The current method maps syllables to a small set of discrete frequency values. Real speech produces continuous formant distributions. We need to test whether finer frequency resolution (more bins) changes the separation between chants and controls, or whether the coarse binning is artificially creating harmonic ratios.
- Control diversity: The current controls are all mundane speech. Stronger controls would include poetry, song lyrics, military cadences, and other rhythmically structured non-sacred speech. If rhythmic speech scores similarly to chants, the scanner may be detecting rhythm rather than something specific to sacred sound architecture.
- Harmonic tolerance threshold: The current method counts a harmonic hit when an interval is within 0.15 of a whole-number ratio. The sensitivity of results to this threshold has not yet been tested. This needs to be varied systematically to confirm the separation is robust.
- The Kyrie anomaly: The Kyrie Eleison scored below two controls. This is reported as-is, not explained away. It may reflect genuine differences in Western vs. Eastern phonetic systems, or it may indicate a limitation in the method. Further testing will clarify.
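One way to address the threshold question is a systematic sweep: recompute harmonic coherence for the same contour at several tolerance values and see whether the score is stable. The ratio set and the five-syllable contour below are illustrative assumptions, not measured chant data.

```python
# Sensitivity sweep for the 0.15 harmonic-tolerance threshold.
RATIOS = (1.0, 1.25, 4 / 3, 1.5, 2.0, 3.0)

def coherence(formants, tol):
    """Harmonic coherence of a formant contour at a given tolerance."""
    pairs = list(zip(formants, formants[1:]))
    hits = sum(
        any(abs(max(a, b) / min(a, b) - r) <= tol for r in RATIOS)
        for a, b in pairs
    )
    return hits / len(pairs)

contour = [870, 1090, 870, 2290, 1090]  # hypothetical formant contour (Hz)
for tol in (0.05, 0.10, 0.15, 0.20, 0.25):
    print(f"tol={tol:.2f}  coherence={coherence(contour, tol):.2f}")
```

If the ranking of chants versus controls flips as the tolerance moves through this range, the separation is an artifact of the threshold; if it only shifts, it is robust.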
This is where the research stands today. The data shows a 30-point separation between sacred chants and mundane speech on harmonic coherence. Whether that separation holds under rigorous testing is the open question. If you are a phonetician, linguist, musicologist, or acoustic researcher and want to contribute, we welcome collaboration.
Ongoing Research — February 18, 2026
The following experiments extend the v3 scanner results. All data is preliminary. Negative results are reported alongside positive ones.
Same-Language Control Test
To test whether high scores reflect the sacred content or just the language's phonetic structure, we compared sacred and mundane phrases in the same language:
| Language | Sacred | Mundane | Gap | Assessment |
|---|---|---|---|---|
| Arabic | 100% | 50% | +50 | Signal |
| Egyptian | 100% | 0% | +100 | Signal |
| Greek | 66.7% | 40% | +26.7 | Signal |
| Sanskrit | 100% | 100% | 0 | Language effect |
| Hebrew | 60% | 60% | 0 | Language effect |
| Aramaic | 50% | 80% | -30 | Reversed |
A mixed result. Three languages show clear separation between sacred and mundane phrases, two show none, and one reverses. We cannot claim universality. The Arabic and Egyptian signals appear real. The Sanskrit and Hebrew scores may reflect each language's phonetic structure rather than the content. The Aramaic reversal needs further investigation. All results are reported as-is.
Temple Acoustics — 110 Hz Harmonic Coupling (Open Question)
Published peer-reviewed research (Princeton PEAR study 1994; Debertolis et al. 2015; Wolfe et al. 2020) shows that multiple Neolithic temples — Hal Saflieni Hypogeum (Malta), Newgrange (Ireland), Wayland's Smithy (UK) — resonate at approximately 110 Hz. The vowel formants that dominate sacred chants sit near exact harmonics of 110 Hz:
| Vowel | Formant (F2) | 110 Hz Harmonic | Distance |
|---|---|---|---|
| "ah" (as in father) | 1090 Hz | 1100 Hz (10th) | 10 Hz |
| "oo" (as in food) | 870 Hz | 880 Hz (8th) | 10 Hz |
| "ee" (as in see) | 2290 Hz | 2310 Hz (21st) | 20 Hz |
The vowels "ah," "oo," and "ee" — which dominate the highest-scoring sacred chants — have formants within 10-20 Hz of exact 110 Hz harmonics. A voice chanting these vowels in a 110 Hz resonant chamber would experience acoustic coupling: the room amplifies what the voice produces. This is not a claim of intentional design. It is an observation of physical proximity between measured temple resonances and measured vowel formants, both from published research. Verification requires spectral analysis of actual recordings made inside these temples. Such recordings exist (Reznikoff, EMA Project) but were not available for direct analysis at this time.
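The proximity claim is simple arithmetic and can be checked directly: round each formant to the nearest integer multiple of 110 Hz and measure the distance. A minimal sketch (function name is illustrative):

```python
def nearest_harmonic(freq_hz, fundamental=110.0):
    """Nearest integer harmonic of `fundamental` and the distance to it (Hz)."""
    n = max(1, round(freq_hz / fundamental))
    return n, n * fundamental, abs(freq_hz - n * fundamental)

for vowel, f2 in [("ah", 1090), ("oo", 870), ("ee", 2290)]:
    n, harmonic, distance = nearest_harmonic(f2)
    print(f'"{vowel}": F2 = {f2} Hz -> harmonic {n} ({harmonic:.0f} Hz), '
          f'off by {distance:.0f} Hz')
```

This reproduces the distances in the table above; whether those distances are audible as coupling in a real chamber is the part that requires in-situ recordings.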
Sovereign Disclaimer: These tools are released for Planetary Calibration and the Simultaneous Construction of a trauma-free reality. They are intended to measure, not to control. To illuminate, not to extract. Use them with the same coherence they were built to detect.