What seems to happen is that a task is rescheduled (with a timerafter of "now"; the code is in the timerafter case) in such a way that the three interface calls that were just made appear not to have been made. I can see this because the previous data set in one of the tasks remains unmodified.
If I introduce short delays (with a timerafter of "now + 1000 us", which isn't terribly short) at one or the other of several strategic places, then the right thing happens. I guess that in effect the delay causes a "yield".
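For reference, the kind of delay I am inserting looks roughly like this (a sketch only, assuming the standard 100 MHz reference clock; short_delay is an illustrative name, not from the real code):

Code: Select all
#include <xs1.h>

// Blocking wait of about 1000 us at the 100 MHz reference clock
// (100 ticks per microsecond). Waiting on the timer also gives the
// scheduler an opportunity to run another task on the same core.
static void short_delay (void) {
    timer tmr;
    unsigned now;
    tmr :> now;
    tmr when timerafter (now + 100 * 1000) :> void;
}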
There are four tasks, and each of them has three client-started-only interfaces to the others. They could each have their own core, or two could be combined on one; the first variant needs longer delays to "get in sync". There are two clients and two servers, connected in a "torus", which means that each of them shares two interfaces with a neighbour. It's meant to simulate temperature flow, and is not a typical application for XC/XCore at all.
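To make the structure concrete, here is a minimal sketch of the wiring (the task names client_node/server_node and the connection variables are made up for illustration; conn_if_t is the interface shown below):

Code: Select all
int main (void) {
    conn_if_t c01, c03, c21, c23;
    par {
        client_node (c01, c03); // task 0: client end of both its links
        server_node (c01, c21); // task 1: server for tasks 0 and 2
        client_node (c21, c23); // task 2: client end of both its links
        server_node (c03, c23); // task 3: server for tasks 0 and 2
    }
    return 0;
}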
All communication is done over this interface:
Code: Select all
typedef interface conn_if_t {
temp_degC_t set_get (const temp_degC_t temp_degC);
// FAIR and SYNCHRONISED and COMBINABLE! Will sometimes imply more than one poll per round trip:
bool poll_all_clients_seen (void);
} conn_if_t;
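For context, the server side is a combinable select loop along these lines (a sketch, not the real code; temp_degC_t is assumed to be defined elsewhere, and the neighbour bookkeeping is simplified away):

Code: Select all
[[combinable]]
void server_node (server conn_if_t a, server conn_if_t b) {
    temp_degC_t prev_a = 0;
    temp_degC_t prev_b = 0;
    while (1) {
        select {
            case a.set_get (const temp_degC_t temp_degC) -> temp_degC_t return_temp:
                return_temp = prev_a; // hand back the previous value
                prev_a = temp_degC;   // and store the new one
                break;
            case a.poll_all_clients_seen () -> bool all_seen:
                all_seen = true;      // placeholder; the real code tracks clients
                break;
            // ... plus the same two cases for interface b ...
        }
    }
}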
I am struggling with all kinds of problems, and have been rather active in filing tickets with XMOS. But about this hypothesis of a race I thought I'd ask here first.
In my earlier, pre-XC life, anything that was solvable by introducing a delay into a concurrent system of tasks usually indicated the presence of a race condition.
Is such a race possible in XC/XCore itself (as opposed to in my application)?