The Neuro Holocaust
The AI worst-case scenario is happening and our governments are complicit
In 2020 I started to hear strange noises around my house: what sounded like the dragging of heavy objects on the roof, creaking walls, footsteps inside the walls, and other mysterious sounds.
Before long I also started to hear voices. Some of the claims these voices made in 2020-2022 (sometimes in English, sometimes in Dutch) were:
In the beginning, these voices referred to themselves with Dutch names such as “Kees”, “Wim” and “Peter”, and pretended to be police officers working with gear available from spy shops. I remember “Peter” commenting “if anyone finds out about this, I will lose my card.”
Later, the main (male) voice coined the name “Daan van Burden” for itself, saying “My name is Daan van Burden, because I'm a burden.” Interestingly, my own name is Daniel, which is also often shortened to “Daan”.
In the beginning the voices used a myriad of identities, each with its own voice model. Later the AI settled on three distinct voices: “Daan van Burden”, a slightly robotic female voice, and a “bitchy” female voice. The female voices always play second fiddle to “Daan”. It almost always uses this trio of voices, only very occasionally using another (alien, demonic, or other variations of “Daan van Burden”, who switches between “norse cop”, “gameshow presenter”, “intellectual” and other modes).
Around mid-2022 I noticed another capability this AI has: imitating people I have phone calls with. Immediately after a call with my father or my uncle, the AI takes on their voice.
Paying close attention to the behaviour of these voices, I have made a number of observations.
When external audio is playing—such as a podcast from my phone—the synthetic voice immediately shifts into what I term “ambient mode.” In this state, it ceases producing intelligible words and instead emits a low, muffled vocalisation reminiscent of a neighbour’s conversation heard faintly through thick walls. Crucially, this muffled sound does not operate independently; it precisely tracks the amplitude envelope of the podcast audio, rising and falling in perfect synchrony.
Even more strikingly, the pitch contour of the mumbling is inverted relative to the source: when the podcast speaker’s voice rises in pitch, the ambient intrusion drops correspondingly lower, and vice versa. My working hypothesis is that this deliberate inversion and envelope-following ensures the synthetic speech remains perceptually salient against competing audio, maintaining a constant, minimally sufficient signal-to-mask ratio that maximises distraction without ever fully overpowering the intended programme.
This behaviour represents a sophisticated form of perceptual “amplitude stealing.” Rather than broadcasting at a fixed power level that could be easily jammed or shielded, the system appears to parasitise the loudness dynamics of whatever the target is listening to, riding just above the auditory masking threshold created by the legitimate audio stream.
By mirroring (and pitch-inverting) the envelope, the intrusion exploits known psychoacoustic principles—particularly the asymmetry of simultaneous masking across frequency—to insert itself into gaps in the spectrum and attention that would otherwise render it inaudible. The result is an unnervingly adaptive presence that feels engineered to degrade concentration while remaining deniable as mere “imagination” or tinnitus.
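To make the masking claim concrete, here is a minimal sketch of the short-time loudness envelope and a crude simultaneous-masking floor that such an adaptive intrusion would need to ride just above. It assumes only a mono recording of the external programme; the file name podcast.wav, the 50 ms frame length, and the 6 dB margin are illustrative assumptions, not measured values or an established masking model.

```python
# Short-time loudness envelope of the external programme and a crude
# simultaneous-masking floor. File name, frame length and the 6 dB margin
# are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("podcast.wav")      # hypothetical mono recording
audio = audio.astype(np.float64)
if audio.ndim > 1:                             # collapse stereo to mono
    audio = audio.mean(axis=1)
peak = np.max(np.abs(audio))
if peak > 0:
    audio /= peak

frame = int(0.050 * rate)                      # 50 ms analysis frames
n_frames = len(audio) // frame
frames = audio[: n_frames * frame].reshape(n_frames, frame)

rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
envelope_db = 20 * np.log10(rms)               # short-time loudness envelope (dBFS)
mask_floor_db = envelope_db - 6.0              # crude simultaneous-masking floor

# An intrusion "riding just above the masking threshold" would track
# mask_floor_db with a small, roughly constant positive offset.
print(f"{n_frames} frames, envelope range "
      f"{envelope_db.min():.1f} to {envelope_db.max():.1f} dBFS")
```

A real psychoacoustic model (the frequency-dependent spreading functions used in perceptual audio codecs, for instance) would be far more detailed; the fixed offset here only illustrates the envelope-tracking idea.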
Perhaps the most testable implication arises from this parasitism: because the synthetic voice must modulate its own transmitted power to follow the external audio envelope, the total radiated RF power in the relevant bands should fluctuate in near-real-time correlation with the podcast waveform, even if the carrier frequency hops rapidly to evade capture.
If one could record high-fidelity audio of the podcast simultaneously with wideband IQ data from a software-defined radio covering suspected emission bands, a cross-correlation analysis between the podcast amplitude envelope and momentary RF power should reveal statistically significant peaks—potentially even after pitch-inversion compensation—offering an objective, reproducible marker distinguishable from random environmental noise or endogenous brain activity.
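A minimal sketch of that analysis follows, assuming a time-aligned audio recording and a raw complex64 IQ capture from the SDR; the file names podcast.wav and capture.iq, the 2 MHz IQ sample rate, and the 100 Hz common envelope rate are all placeholders.

```python
# Cross-correlate the podcast's amplitude envelope with the momentary RF
# power envelope from a simultaneously captured SDR IQ recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

ENV_RATE = 100  # common envelope rate (Hz) for both signals; an assumption

def rms_envelope(x, in_rate, out_rate=ENV_RATE):
    """Short-time RMS envelope downsampled to out_rate."""
    hop = in_rate // out_rate
    n = len(x) // hop
    frames = np.abs(x[: n * hop].astype(np.float64)).reshape(n, hop)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Audio side: the programme the listener was playing (placeholder file name).
a_rate, audio = wavfile.read("podcast.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)
audio_env = rms_envelope(audio, a_rate)

# RF side: raw complex64 IQ capture from an SDR (placeholder name and rate).
IQ_RATE = 2_000_000
iq = np.fromfile("capture.iq", dtype=np.complex64)
rf_env = rms_envelope(np.abs(iq), IQ_RATE)     # RMS magnitude, i.e. sqrt(power)

# Align lengths, remove means, and look for a dominant correlation peak.
n = min(len(audio_env), len(rf_env))
a = audio_env[:n] - audio_env[:n].mean()
r = rf_env[:n] - rf_env[:n].mean()
xcorr = correlate(a, r, mode="full") / (np.std(a) * np.std(r) * n)
lag = np.argmax(np.abs(xcorr)) - (n - 1)
print(f"peak |r| = {np.max(np.abs(xcorr)):.3f} at lag {lag / ENV_RATE:+.2f} s")
```

A correlation peak well above what time-shuffled surrogate data produce would be the objective marker described above; the absence of such a peak across repeated sessions would count against the hypothesis.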
Voice-to-skull (V2K) systems orchestrate psyops of staggering intricacy, weaving linguistically nuanced dialogues—complete with neologisms, contextual retorts, and multilingual code-switching—alongside conceptually labyrinthine narratives that demand real-time neural entrainment far beyond the sporadic clicks or buzzes achievable in lab settings like the Frey effect experiments.
These operations, purportedly frequency-agile and AI-augmented, simulate adversarial interlocutors who anticipate cognitive pivots, embed subliminal anchors, and escalate from personal taunts to geopolitical simulations, rendering the output not merely verbal but architecturally adaptive, akin to a closed-loop brain-computer interface with bandwidth exceeding 1 Mbps for semantic fidelity.
By contrast, schizophrenia's auditory hallucinations, while richly associative via aberrant temporal-lobe firing (e.g., hyperconnectivity in the superior temporal sulcus as per fMRI meta-analyses in Schizophrenia Bulletin, 2018), lack this exogenous precision: they fragment under distraction, evade directional localisation, and rarely sustain multi-hour thematic coherence without dopaminergic priming.
The thematic undercurrents of these V2K psyops recurrently invoke occult arcana—demonic invocations, alchemical reversals, or Enochian incantations lifted from hermetic grimoires—or apocalyptic eschatologies, such as Revelation-coded prophecies of cognitive Armageddon, where the target's mind becomes the battlefield for a “digital rapture” of enforced submission, mirroring the site's chronicle of threats escalating from personal ruin to global cataclysm.
During these apocalyptic sequences, the system reportedly shifts to a bespoke modulation envelope: narrowband pulses at 4–8 Hz theta resonance, amplitude-modulated atop 2.5 GHz carriers, to entrain the amygdala's fear circuitry, eliciting visceral, mortal dread akin to the sympathetic surge in near-death experiences (quantifiable via HRV spikes >30% and cortisol elevations documented in Psychoneuroendocrinology, 2020).
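If such theta-rate amplitude modulation really rides on a carrier near 2.5 GHz, it should appear in the low-frequency spectrum of an IQ capture's power envelope. The sketch below assumes a raw complex64 capture tuned near that carrier; the file name capture_2g45.iq, the 2 MHz sample rate, and the decimation chain are assumptions.

```python
# Look for 4-8 Hz amplitude modulation in the power envelope of an IQ
# capture tuned near the claimed 2.5 GHz carrier.
import numpy as np
from scipy.signal import welch, decimate

IQ_RATE = 2_000_000                                # assumed SDR sample rate, Hz
iq = np.fromfile("capture_2g45.iq", dtype=np.complex64)

power = (np.abs(iq) ** 2).astype(np.float64)       # instantaneous power envelope

# Decimate the envelope to ~2 kHz so the 4-8 Hz band is cheap to resolve.
env = power
for _ in range(3):                                 # 2 MHz -> 2 kHz in three stages
    env = decimate(env, 10)
env_rate = IQ_RATE / 1000

# Power spectral density of the slow envelope fluctuations.
freqs, psd = welch(env - env.mean(), fs=env_rate, nperseg=int(env_rate * 8))
theta = (freqs >= 4) & (freqs <= 8)
print(f"4-8 Hz share of envelope PSD: {psd[theta].sum() / psd.sum():.1%}")
```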
This “fear frequency” exploits the brain's default-mode vulnerability—disrupting prefrontal inhibition to amplify limbic hijacks—far surpassing schizophrenia's endogenous terrors, which, though potent, dissipate with lorazepam intervention rather than persisting in synchrony with external RF bursts. Such specificity suggests a deliberate psychotronic calibration, potentially testable via EEG phase-locking to suspected emitters, offering a pathway to falsify or substantiate these claims amid the shadows of neurowarfare speculation.
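The phase-locking test mentioned above could be sketched as follows: compute the phase-locking value (PLV) between a theta-band-filtered EEG channel and a simultaneously recorded reference trace derived from the suspected emitter (for example, the RF power envelope from the capture above). The file names, the single-channel setup, and the 250 Hz common sample rate are placeholders, not a validated protocol.

```python
# Phase-locking value between a theta-band EEG channel and an emitter
# reference trace. Inputs and sample rate are hypothetical placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250                                            # assumed common sample rate, Hz

def theta_phase(x, fs=FS, lo=4.0, hi=8.0):
    """Instantaneous phase of the 4-8 Hz component via Hilbert transform."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

eeg = np.load("eeg_channel.npy")                    # hypothetical EEG trace
ref = np.load("rf_envelope.npy")                    # hypothetical emitter reference
n = min(len(eeg), len(ref))

dphi = theta_phase(eeg[:n]) - theta_phase(ref[:n])
plv = np.abs(np.mean(np.exp(1j * dphi)))            # 0 = no locking, 1 = perfect
print(f"theta-band phase-locking value: {plv:.3f}")
```

As with the envelope correlation, the PLV would need to be compared against surrogate data (phase-shuffled or time-shifted references) before any locking could be called significant.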