Summary: ASTERISK-14530: RTCP jitter incorrect
Reporter: Stanislaw Pitucha (viraptor)
Labels:
Date Opened: 2009-07-26 13:40:23
Date Closed: 2012-01-19 16:43:01.000-0600
Description: The RTCP IA jitter field seems to be calculated incorrectly. The code in res_rtp_asterisk says:
rtcpheader = htonl((unsigned int)(rtp->rxjitter * 65536.));
Here (rtp->rxjitter) appears to be calculated from local wall-clock time (instead of RTP timestamps), so it is expressed in seconds. If that's true, then (rtp->rxjitter) should be multiplied by the codec clock rate (rate) instead of 65536 to get the proper value to include in RTCP.
This has been the case since 1.4 and persists in the current svn-trunk.
Comments:

By: Leif Madsen (lmadsen) 2009-09-03 15:03:49
Acknowledging this issue due to the discussion on asterisk-dev: "definition of RTP jitter - potential bug in Asterisk".
By: Leif Madsen (lmadsen) 2009-09-03 17:56:00
From the mailing list:
> But let me ask a few questions: What does "jitter" even measure? I
> had interpreted jitter to mean the difference (via some smoothing
> mechanism) in milliseconds between the delays of subsequent packets in
> a uni-directional stream. Why would frequency, or any codec-layer
> computation be involved in this determination?
Well - as you write in your post, jitter depends on 2 values (when
calculating the D function as in RFC 1889 6.3.1). "Si is the RTP
timestamp from packet i, and Ri is the time of arrival in RTP
timestamp units". The RTP timestamp is incremented by the number of
samples of RTP data sent in every packet (which equals the byte count
for 8-bit codecs like alaw). In that case it will usually
be incremented by 160 for a typical alaw call. Since we're using RTP
timestamps the whole time, it's going to be 8000 RTP units / second
(for that alaw call).
If you stick to calculating the value in units specified in the RFC,
you get the jitter in those units and you don't have to care about the
"real time" at all. S is the "perfect" time for arrival, R is the
arrival timestamp. The rate only matters for presenting the result in
ms. But the jitter of 1 unit (byte) for a codec at 8kHz is different
in real time than the one for a codec at 16kHz.
Asterisk calculates D based not on the timestamps but on wall-clock time, so the result is in time units too. In res_rtp_.../calc_rxstamp():
tv->tv_sec = rtp->rxcore.tv_sec + timestamp / 8000;
tv->tv_usec = rtp->rxcore.tv_usec + (timestamp % 8000) * 125;
(time = time + timestamp/rate, which means time + time)
Which is OK for a/ulaw (8 kHz) and wrong for codecs with other clock rates.
current_time = (double)now.tv_sec + (double)now.tv_usec/1000000;
transit = current_time - dtv;
You're using the real time and not the timestamps for jitter, so you
get jitter in real world time assuming rate=8000.
You can check the value with Wireshark - there was a very nice post
describing how the jitter is calculated on their ML (can't find the
link now unfortunately). If you multiply by the current rate instead,
the results match.
So I see two solutions. You can either leave the calculation as is and
multiply by rate to get the result in RTP units (the value that should
be in the RTCP packet), or change the code to:
tv = time*rate + timestamp
transit = current_time*rate - dtv
and then rxjitter will be in rtp units.
Stanisław Pitucha, Gradwell Voip Engineer
By: Leif Madsen (lmadsen) 2010-04-14 09:50:51
Pinging you about this issue as it may be of some interest to you regarding the work you're doing right now.
By: Olle Johansson (oej) 2011-02-14 07:13:51.000-0600
Ping. This is interesting. I've realized that jitter is all wrong in my pinefrog work, and this issue gives a lot of good feedback I had not seen before. Thanks.
By: Luke H (luckman212) 2012-01-25 08:53:46.312-0600
Olle- are you still working on that pinefrog branch? I'd love to play with it but the original link from ASTERISK-16404 seems dead. Just wondering if anything is happening anymore with Asterisk + RTCP. cheers