Numerous neurophysiological studies have shown that cortical oscillations play an important role in segmenting, parsing, and coding continuous speech streams by demonstrating that brain rhythms track speech rhythms, a process known as speech entrainment. Recent evidence suggests that this mechanism facilitates speech intelligibility. Using magnetoencephalography (MEG), we recently demonstrated that the same holds for visual speech (lip movements). This has led us to ask to what extent auditory and visual information are represented in brain areas, either jointly or individually: which system conveys the information shared between multisensory inputs, and which system represents the inputs synergistically?

In my talk, I will first present our recent work showing how information in entrained auditory and visual speech interacts to facilitate speech comprehension. Here we used a novel information-theoretic approach, Partial Information Decomposition (PID), to decompose dynamic information into its constituent quantities: synergy, redundancy, and unique information. Second, I will show how the information interaction between audiovisual speech rhythms is represented over time in different brain regions. Third, I will discuss our recent results on linking function to anatomy via diffusion tensor imaging, which revealed that individual differences in white matter integrity differentially predict the information interaction between audiovisual speech rhythms. Fourth, I will demonstrate how the brain processes high-level semantic gist (topic keywords) using a Natural Language Processing (NLP)-based topic modelling algorithm, together with our current work on speech development using MEG/OPM-MEG. Lastly, I will show preliminary data on augmenting cognitive performance using the rapid invisible frequency tagging (RIFT) technique in combination with human electrophysiological data.
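For readers unfamiliar with the Partial Information Decomposition mentioned above, the sketch below is my own minimal illustration (not the speaker's analysis code): it applies the two-source PID of Williams and Beer, using their I_min redundancy measure, to a toy XOR distribution in which all of the joint information about the target is synergistic. The talk's actual analysis operates on time-resolved MEG and audiovisual speech signals and may use a different redundancy definition.

```python
# Minimal sketch of a two-source Partial Information Decomposition
# (Williams & Beer, 2010) on a toy joint distribution p(a, v, b).
# Illustration only; not the speaker's analysis pipeline.
import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])))

def i_min(p_avb):
    """Williams-Beer redundancy I_min(B; {A}, {V}) in bits, p_avb[a, v, b]."""
    pb = p_avb.sum(axis=(0, 1))          # p(b)
    pab = p_avb.sum(axis=1)              # p(a, b)
    pvb = p_avb.sum(axis=0)              # p(v, b)
    red = 0.0
    for b in range(p_avb.shape[2]):
        if pb[b] == 0:
            continue
        spec = []                        # specific information I(B=b; X) per source
        for pxb in (pab, pvb):
            px = pxb.sum(axis=1)
            px_given_b = pxb[:, b] / pb[b]
            nz = px_given_b > 0
            spec.append(np.sum(px_given_b[nz] *
                               np.log2(px_given_b[nz] / px[nz])))
        red += pb[b] * min(spec)
    return float(red)

# Toy example: B = A XOR V with independent uniform A, V -> purely synergistic.
p = np.zeros((2, 2, 2))
for a in range(2):
    for v in range(2):
        p[a, v, a ^ v] = 0.25

I_ab  = mutual_info(p.sum(axis=1))       # I(A;B)
I_vb  = mutual_info(p.sum(axis=0))       # I(V;B)
I_avb = mutual_info(p.reshape(4, 2))     # I(A,V;B)
R = i_min(p)
U_a, U_v = I_ab - R, I_vb - R
S = I_avb - U_a - U_v - R
print(f"redundancy={R:.3f} unique_A={U_a:.3f} unique_V={U_v:.3f} synergy={S:.3f}")
# Expected: redundancy and unique terms are 0; synergy is 1 bit.
```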