MCU server implementation audio mix and relay #618
-
You need an audio mixer. I've never done it myself, but a quick search on NAudio and mixing suggests it should be possible.
Are you using a different WebRTC SDK for your Android client? I'm guessing yes, since I'm yet to hear of anyone having success with a .NET based iOS or Android client that can handle audio. The problems with the audio could be occurring due to the way you're forwarding it. I'd recommend checking the audio issues with only two peers. If that's OK and the issues only appear when you add subsequent peers, then it could be your mixing (or lack thereof).
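As I said I haven't tried it, but a minimal sketch of the kind of mixing NAudio can do, assuming 8 kHz mono audio that has already been decoded to 16-bit PCM, might look like this (only the NAudio types are real; everything else is an illustrative placeholder):

```csharp
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// The mixer operates on IEEE float samples; 8 kHz mono matches G.711 audio.
var mixFormat = WaveFormat.CreateIeeeFloatWaveFormat(8000, 1);
var mixer = new MixingSampleProvider(mixFormat) { ReadFully = true };

// One buffered provider per participant, fed with that participant's decoded 16-bit PCM.
var peerBuffer = new BufferedWaveProvider(new WaveFormat(8000, 16, 1));
mixer.AddMixerInput(peerBuffer.ToSampleProvider());

// Elsewhere, as RTP audio arrives and is decoded to PCM bytes:
// peerBuffer.AddSamples(pcmBytes, 0, pcmBytes.Length);

// Pull the mix in 20 ms blocks (160 samples at 8 kHz), then encode and send it on.
float[] mixed = new float[160];
int samplesRead = mixer.Read(mixed, 0, mixed.Length);
```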
-
One suggestion would be to skip OPUS for now and instead use PCM. That will simplify your decode/encode stage a lot. You get your calls to use PCM by advertising it as the only audio codec in your SDP offer or answer. To employ an audio mixer the high level steps are roughly (a sketch of the mixing step follows the list):
1. Decode each participant's incoming RTP audio payload to raw 16-bit PCM samples (with PCMU/PCMA this is a simple per-sample conversion).
2. For each participant, sum the samples from all the other participants into a single buffer, clamping to the 16-bit range.
3. Encode that mixed buffer back to the negotiated codec and send it to that participant over their single peer connection.
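As a very rough illustration of step 2 only (the decode/encode either side of it is omitted, and all of the names below are placeholders rather than anything from the library):

```csharp
using System;
using System.Collections.Generic;

// Mix one 20 ms block (160 samples at 8 kHz) for a given participant by summing the
// decoded PCM from every OTHER participant and clamping to the 16-bit range.
static short[] MixForParticipant(IDictionary<string, short[]> latestPcmByPeer, string targetPeerId)
{
    var mixed = new short[160];

    foreach (var entry in latestPcmByPeer)
    {
        if (entry.Key == targetPeerId) continue;   // don't echo a participant's own audio back

        var pcm = entry.Value;
        for (int i = 0; i < mixed.Length && i < pcm.Length; i++)
        {
            int sum = mixed[i] + pcm[i];
            mixed[i] = (short)Math.Clamp(sum, (int)short.MinValue, (int)short.MaxValue);
        }
    }

    return mixed;
}
```

The mixed block for each participant then gets encoded and pushed out with SendRtpRaw on that participant's connection.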
You should also be synchronising the RTP timestamps across the participants before feeding into the mixer. To start with I would skip this stage. It should be good enough to get usable audio without such a synchronisation. You could look at this once you've managed to get the mixer working.
-
Hey, I'm trying to implement an MCU server for a POC. It only has to handle up to 5 people on a call, but, following the MCU model, each participant will only have one peer connection open with the server. Basically I have two problems.
First problem
I was able to connect 2 people on the server and relay the audio using the SendRtpRaw(SDPMediaTypesEnum.audio, pkt.Payload, pkt.Header.Timestamp, pkt.Header.MarkerBit, pkt.Header.PayloadType) method. But when a third connection enters the call, packets are lost and the call goes wild. I know that I have to take all the audio streams I receive in the OnRtpPacketReceived handler, merge them into a single stream and then send that stream to everyone in the call, but I don't have much of an idea of how to do this.
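For reference, the relay currently amounts to something like this (simplified; the method and collection names are placeholders, not my exact code):

```csharp
using System.Collections.Generic;
using SIPSorcery.Net;

// Naive relay: every audio packet received from one participant is forwarded,
// unchanged, to every other participant's peer connection. This is fine with two
// peers, but with three or more the forwarded streams interleave on each receiver,
// which is why a single mixed stream per participant is needed instead.
void WireUpRelay(string senderId, RTCPeerConnection senderPc,
    Dictionary<string, RTCPeerConnection> participants)
{
    senderPc.OnRtpPacketReceived += (remoteEndPoint, mediaType, pkt) =>
    {
        if (mediaType != SDPMediaTypesEnum.audio) return;

        foreach (var other in participants)
        {
            if (other.Key == senderId) continue;

            other.Value.SendRtpRaw(SDPMediaTypesEnum.audio, pkt.Payload,
                pkt.Header.Timestamp, pkt.Header.MarkerBit, pkt.Header.PayloadType);
        }
    };
}
```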
Second problem
I made a client using Angular to test the server while developing, but the actual objective is a native Android app that connects to the server to communicate with the others. The Android client always crashes a few moments after connecting, and the native Android library doesn't give me any clue about what's happening :(.
Also, for the few seconds the Android app stays connected, the audio quality is very poor with a lot of echoing, while the web client receives almost perfect sound.
If you can help me by giving me an idea of why this is happening, and of where I can start with the audio mixing, I would be much obliged.
My code so far
Web client (Angular)
Web API
The web API that the mobile app is using