How is it possible to achieve real 5.1 with headphones?

Generally, the direction of a sound is determined by the human auditory system inside the brain. The most crucial information for this task is the way the different parts of both ears and the body affect the incoming sound, altering the system response of the ears depending on the direction. It is important that both ears are involved in this localization process. The ear closer to the sound source is usually referred to as ipsilateral (same-sided) and the shadowed one as contralateral. One of the most important localization cues is how the contralateral ear relates to the ipsilateral ear, as there will be both spectral and timing differences when the two are compared at the auditory cortex.
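The timing difference between the two ears (the interaural time difference, ITD) can be sketched with Woodworth's classic spherical-head approximation. This is only an illustrative textbook model, not HeaDSPeaker's actual processing; the head radius and speed of sound are assumed typical values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a far-field
    source at the given azimuth (0 = straight ahead, positive = toward
    the right ear), using Woodworth's spherical-head formula:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source straight ahead produces no ITD; one at 90 degrees gives the
# maximum delay of roughly 0.66 ms for an average-sized head.
print(round(woodworth_itd(0) * 1000, 3))   # 0.0
print(round(woodworth_itd(90) * 1000, 3))  # 0.656
```

A delay of well under a millisecond is enough for the auditory cortex to lateralize the source, which is why the contralateral/ipsilateral comparison is such a strong cue.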

The human auditory system also uses head movement for more precise localization, as movement is one of the most important factors in front/back separation. Take an example where the sound comes directly from behind the listener. If the listener turns his head to the right, the right ear will receive the sound a little before the left one. However, if the sound came from the front instead, the left ear would receive the sound before the right. This is an oversimplified example, but it demonstrates that even the smallest movement gives "affirmation" of where the sound is really coming from. After the brain learns the Head-Related Transfer Functions (HRTFs) and their correlation with head-tracked auralization, sound directions are easily heard even when the head is still. The dynamic process involving head-movement compensation can be viewed as a "learning process" for the HRTF model currently used in auralization.
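The front/back example above can be sketched in a few lines. This toy model only asks which ear is geometrically closer to a far-field source; it is an illustration of the reasoning, not HeaDSPeaker code.

```python
import math

def leading_ear(source_az_deg, head_yaw_deg):
    """Return which ear receives the wavefront first for a far-field
    source, given the source azimuth and the listener's head yaw
    (degrees, 0 = straight ahead, positive = clockwise/right turn)."""
    rel = math.radians(source_az_deg - head_yaw_deg)
    # Projection of the source direction onto the interaural axis:
    # positive -> right ear is closer and therefore leads.
    side = math.sin(rel)
    if abs(side) < 1e-9:
        return "both"   # source in the median plane: no timing cue
    return "right" if side > 0 else "left"

# With the head still, front (0 deg) and back (180 deg) are ambiguous:
print(leading_ear(0, 0), leading_ear(180, 0))   # both both
# A small turn to the right breaks the tie differently for each case:
print(leading_ear(180, 10))   # right  (source behind)
print(leading_ear(0, 10))     # left   (source in front)
```

The identical "both" result for front and back is exactly the front/back confusion the text describes, and the head turn resolves it in opposite directions for the two cases.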

What distinguishes HeaDSPeaker from other headphone "surround" algorithms is that it gives the listener a mechanism to actually hear the sound from the direction it is meant to come from, instead of merely colorizing the sound with static HRTFs. This is achieved with dynamic algorithms that react to head movement and with lag-free cross-correlation modeling of the binaural sound. It is not enough to filter the sound with HRTFs; the filtering must also match the current head posture.
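The core of "filtering with the current head posture" is compensating the virtual speaker azimuth by the head yaw before choosing an HRTF. A minimal sketch, assuming a hypothetical HRTF set measured on a 5-degree azimuth grid (the grid spacing and selection-by-nearest-direction are illustrative assumptions, not HeaDSPeaker's documented method):

```python
def compensated_azimuth(source_az_deg, head_yaw_deg):
    """Azimuth of a virtual loudspeaker relative to the listener's
    current head orientation, wrapped to (-180, 180] degrees."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

def select_hrtf(source_az_deg, head_yaw_deg, hrtf_grid_deg=5):
    """Pick the nearest measured HRTF direction from an assumed
    5-degree azimuth grid for the compensated angle."""
    rel = compensated_azimuth(source_az_deg, head_yaw_deg)
    return round(rel / hrtf_grid_deg) * hrtf_grid_deg

# The center channel (0 deg) with the head turned 32 deg to the right
# must be rendered from 32 deg to the *left* of the head:
print(compensated_azimuth(0, 32))   # -32.0
print(select_hrtf(0, 32))           # -30
```

A static renderer would keep using the 0-degree HRTF here, which is exactly the mismatch the text warns against.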

If only the directions of the multichannel audio are auralized, the sound is still localized very close to the head. It can almost appear to be inside the head, especially for the center channel when the head is facing straight ahead. Because of this, the sound must also be given some distance by modelling the distance-cue mechanism of the auditory system. HeaDSPeaker does this with a subtle auditorium rendering model (not just a reverb or echo effect), which also reacts dynamically to the movement of the listener's head, just like in real life.
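One of the strongest distance cues such a room model provides is the direct-to-reverberant energy ratio: direct sound falls off with distance while the diffuse reverberant field stays roughly constant. A minimal sketch of that cue, assuming a simple diffuse-field model with an assumed critical distance of 1.5 m (the distance at which direct and reverberant energy are equal):

```python
import math

def direct_to_reverberant_db(distance_m, critical_distance_m=1.5):
    """Direct-to-reverberant energy ratio in dB for a simple room
    model: direct energy falls as 1/d^2, the reverberant field is
    constant, and the two are equal (0 dB) at the critical distance.
    The 1.5 m critical distance is an assumed example value."""
    return 20.0 * math.log10(critical_distance_m / distance_m)

# Each doubling of distance lowers the ratio by ~6 dB, which the
# auditory system reads as "farther away":
print(round(direct_to_reverberant_db(1.5), 1))  # 0.0
print(round(direct_to_reverberant_db(3.0), 1))  # -6.0
```

This is why a plain reverb effect is not enough: the ratio (and its change as the head moves) has to be modeled per source direction and distance.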

Here is a simple peek into the internal process of the HeaDSPeaker DSP. A DSP unit with a transmitting beacon (lower right) has constant access to the listener's heading angle. This angle is fed back to the DSP for sophisticated azimuth-angle processing, which updates the auralized directions of the five loudspeakers, each of which lies in a different direction relative to the head. The multichannel information is then auralized and rendered by the DSP with the revolutionary HeaDSPeaker auralization algorithm, so that the listener hears whether a sound is coming from, for example, behind or in front. The angular update process runs 820 times per second, a hundred times faster than competing technologies. This way the perceived sound image does not suffer from lag or from jumping, "teleporting" speaker positions.
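The effect of the update rate is easy to quantify. Taking the 820 updates per second from the text, and assuming a brisk head rotation of 200 degrees per second as a worst case, the speaker directions can drift only a fraction of a degree between updates:

```python
UPDATE_RATE_HZ = 820          # angular updates per second (from the text)
FAST_HEAD_TURN_DEG_S = 200.0  # assumed brisk head-rotation speed

# Worst-case angular error accumulated between two consecutive updates:
max_error_deg = FAST_HEAD_TURN_DEG_S / UPDATE_RATE_HZ
print(round(max_error_deg, 2))   # 0.24

# A renderer updating a hundred times slower (8.2 Hz) under the same
# head motion would let the virtual speakers drift far enough to be
# heard as jumps:
print(round(FAST_HEAD_TURN_DEG_S / (UPDATE_RATE_HZ / 100), 1))  # 24.4
```

A quarter-degree drift is well below audible localization thresholds, while a 24-degree jump is roughly the spacing between adjacent 5.1 speaker positions, which is what produces the "teleporting" artifact.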