The XUF216 receives the MCLK and generates the microphone clock, which is connected to the D-flip-flop; the flip-flop itself is clocked by the MCLK.

It is my understanding that the flip-flop reduces jitter on the clock output of the XUF216, but also delays it by one MCLK period. For a sampling rate of 48 kHz the microphone clock runs at 3.072 MHz and the MCLK at 12.288 MHz.

I simulated this in Matlab, and the plot shows that the point in time at which the data becomes valid (green dashed line) lies outside the specified eye (black and blue dashed lines). As far as I understand, this violates the setup and hold timing requirements. Can anybody verify this or show me where my assumptions are wrong?

Simulation assumptions:

The xCORE I/O timing requirements (document here, Sec. 4.1) state that:

t_setup = 21.3ns (dashed black line in the plot)

t_hold = -11ns (dashed blue line in the plot)

The Infineon microphones (datasheet here) specify the following timings.

Time to data line driven from clock edge (data output on the falling edge):

t_DD_min = 40ns (dotted green line in the plot)

t_DD_max = 80ns (dotted red line in the plot)

Time to data valid from clock edge

t_DV = 100ns (dashed green line in the plot)

Time to data HI-Z

t_HZ_max = 30ns (dashed red line in the plot)
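Putting these numbers together gives the violation directly; here is a quick arithmetic sanity check in Python (a sketch, assuming the D-FF delays the mic clock by exactly one MCLK period and the XMOS samples on the rising edge of its own clock output, as in the simulation):

```python
# Setup-margin check for the PDM mic interface (all times in ns).
# Assumptions: D-FF adds one MCLK period of delay; mic data changes on the
# falling edge of the D-FF clock; the XMOS samples on the rising edge of
# its own (undelayed) clock output.

T_mic = 1e9 / 3.072e6    # mic clock period, ~325.5 ns
T_src = 1e9 / 12.288e6   # MCLK period, ~81.4 ns

t_setup = 21.3           # xCORE input setup time
t_dv = 100.0             # mic time-to-data-valid (datasheet worst case)

# Data becomes valid t_dv after the falling edge of the delayed clock.
data_valid = T_mic / 2 + T_src + t_dv
# It must be valid t_setup before the next rising edge of the XMOS clock.
deadline = T_mic - t_setup

margin = deadline - data_valid
print(f"data valid at {data_valid:.1f} ns, deadline {deadline:.1f} ns, "
      f"margin {margin:.1f} ns")  # negative margin -> setup violation
```

With the worst-case t_DV = 100 ns the margin comes out roughly -40 ns, which is the violation visible in the plot.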

Simulation plot: (attachment not shown)

Matlab code:

```
close all
lineWidth = 1;
% All frequencies in MHz, all timings in us
% Frequency in MHz
f_mic_clock = 3.072;
f_src_clock = 12.288;
% Phase shift XMOS output to D-FF output
% via: t = phi/(360*f) <-> phi = t*360*f = 1/(12.288MHz)*360*3.072MHz
% of course in radians
phi_ff = deg2rad(90);
% Timings for infineon mic in reference design
% Time to dataline driven
t_dd_min_infi = 0.040;
t_dd_max_infi = 0.080;
% Time to data valid
t_dv_infi = 0.1;
% Time to data Hi-Z
t_dz_min_infi = 0.005;
t_dz_max_infi = 0.030;
% Hold time
%t_h_infi = 0.005;
% Sample rate of this simulation
fs = 1e6;
% Time vector
t = 0:1/fs:1;
% 12MHz signal with y offset for better visibility
y_12M = square(2*pi*f_src_clock*t) + 3.5;
% XMOS 3.072MHz
y_3M = square(2*pi*f_mic_clock*t)+1.2;
% D-FF 3.072MHz
y_ff = square(2*pi*f_mic_clock*t-phi_ff)-1.2;
plot(t,y_12M, 'LineWidth', lineWidth)
hold on
plot(t,y_3M, 'LineWidth', lineWidth)
plot(t,y_ff, 'LineWidth', lineWidth)
% Lines for DFF Clock
% Mic t_dd,min
xline(1/f_mic_clock/2+1/f_src_clock+t_dd_min_infi, ':g', 'LineWidth', lineWidth+1)
%Mic t_dd,max
xline(1/f_mic_clock/2+1/f_src_clock+t_dd_max_infi, ':r', 'LineWidth', lineWidth+1)
% Mic t_dv
xline(1/f_mic_clock/2+1/f_src_clock+t_dv_infi, '--g', 'LineWidth', lineWidth)
% Mic t_HZ,max
xline(1/f_mic_clock+1/f_src_clock+t_dz_max_infi, '--r', 'LineWidth', lineWidth)
% Lines for XMOS Out Clock
% XMOS t_su
xline(1/f_mic_clock-0.0213, '--k', 'LineWidth', lineWidth)
% XMOS t_h
xline(1/f_mic_clock-0.011, '--b', 'LineWidth', lineWidth)
hold off
axis([-0.1 1 -3 5])
xlabel('Time (us)')
grid on
legend("Source: 12.288M", "XMOS Output: 3.072M", "D-FF Output: 3.072M", ...
"Mic t_{dd,min}", "Mic t_{dd,max}", "Mic t_{DV}", "Mic t_{HZ,max}", ...
"XMOS t_{setup}", "XMOS t_{hold}");
title("Timing diagram for reference design")
```

**Edit:**

I made some measurements today and can confirm that t_dv is about 45ns; this meets the timing requirements but does not leave much room for tolerances. The jitter on the mic clock is about ±3ns, which should be negligible, so I wonder why the D-flip-flop was used at all...

The specified t_dv = 100ns is a worst-case figure for a 100pF load on the data line, which apparently does not apply here.
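Repeating the margin arithmetic with the measured value (a sketch, again assuming one MCLK period of D-FF delay and sampling on the rising edge of the XMOS clock output) shows the requirement is met, but only barely:

```python
# Re-check the setup margin with the measured t_dv (all times in ns).
# Assumptions: D-FF adds one MCLK period of delay; XMOS samples on the
# rising edge of its own clock output.
T_mic = 1e9 / 3.072e6    # mic clock period, ~325.5 ns
T_src = 1e9 / 12.288e6   # MCLK period, ~81.4 ns
t_setup = 21.3           # xCORE input setup time
t_dv_meas = 45.0         # measured time to data valid

data_valid = T_mic / 2 + T_src + t_dv_meas  # after the D-FF falling edge
deadline = T_mic - t_setup                  # sampling edge minus setup
margin = deadline - data_valid
print(f"setup margin: {margin:.1f} ns")  # ~15 ns -> met, but tight
```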