Accessing another process's memory

segher
XCore Expert
Posts: 843
Joined: Sun Jul 11, 2010 1:31 am

Re: Accessing another process's memory

Postby segher » Tue Jun 11, 2013 5:47 pm

Ross wrote:And scaling to multiple tiles "just works" ;)
Yeah -- if you didn't need to lock most tasks to a particular CPU
because of the I/O ports, you would in principle not care at all what
CPU a task runs on (well, you might want to arrange things by hand
to get best performance, but... :-) )
pstnotpd
XCore Addict
Posts: 161
Joined: Sun Jun 12, 2011 11:47 am

Postby pstnotpd » Wed Jun 12, 2013 10:04 am

Treczoks wrote:I know that you really love your channels at XMOS. But they are no cure-alls.
Basically, I'm just wondering why a "one process writes, N processes read" model is so actively discouraged by the toolset, and suspected a technical reason as in "A memory cell just written by process A cannot be read by process B in the next cycle" or something like that.
Isn't that why it's presented as CSP? I recall a presentation by David May about the drawbacks of shared memory in concurrent processing.

The tiny 64K is kind of a telltale sign as well, i.m.h.o.
Treczoks
Active Member
Posts: 38
Joined: Thu Mar 21, 2013 11:18 am

Postby Treczoks » Wed Jun 12, 2013 3:30 pm

Hi, Segher,

Thanks for your reply.
segher wrote:In XC separate threads ("things inside a par") are *completely* separate.
That is rather fundamental in the language design. It has the disadvantage
that you cannot share memory between threads; it also has many advantages
in how the compiler can manipulate your code, and prove things about your
code.
The optimisation advantages are obvious - not having "volatile" stuff makes register dispatch easier.
But what do you mean by "prove"? If you are talking about timing analysis, I'd say that memory accesses are way easier to time than channel IO - you never know if the channel blocks for some reason, and you're forced to assume an execution time for every access that might work, without a guarantee that this really comes to pass.
segher wrote:It isn't very often something that gets in your way, because it is *faster*
to pass data via a channel than it is to pass it through memory.
If you reduce it to the single instruction of reading from a memory cell vs. reading from a channel, then you're right. If you have to build an infrastructure (A sends to B: give me the data, B sends data back), I doubt it. Besides, it adds load to the writing process without need.

I do understand the reasons for and implications of keeping the threads independent. Nonetheless, the lack of shared memory leaves a heavy taste of incompleteness.

Yours, Christian Treczoks
segher
XCore Expert
Posts: 843
Joined: Sun Jul 11, 2010 1:31 am

Postby segher » Wed Jun 12, 2013 4:23 pm

Treczoks wrote:The optimisation advantages are obvious - not having "volatile" stuff makes register dispatch easier.
What does this mean? We weren't talking about volatile, and I have
no idea what you mean by "register dispatch".
But what do you mean by "prove"?
I meant both assertions that the compiler needs to be able to safely
do certain transforms; and things it wants to derive for the outside
world ("this code always runs in at least 16 and at most 20 cycles").
If you are talking about timing analysis, I'd say that memory accesses are way easier to time than channel IO - you never know if the channel blocks for some reason, and you're forced to assume an execution time for every access that might work, without a guarantee that this really comes to pass.
Channel I/O is very predictable, too, as long as you know what code
is running at the other end of the channel (and you do). Channels do
not magically block, they do not have a temper.
segher wrote:It isn't very often something that gets in your way, because it is *faster*
to pass data via a channel than it is to pass it through memory.
If you reduce it to the single instruction of reading from a memory cell vs. reading from a channel, then you're right. If you have to build an infrastructure (A sends to B: give me the data, B sends data back), I doubt it. Besides, it adds load to the writing process without need.
Doubt what you want; how about actually trying it out? Complaining
about things unfamiliar to you does not make you more familiar with
them, and neither does it change how well things perform.

Memory is good for data you want to put aside for a while. Channels
are good for data that you want to keep flowing. You usually want to
keep things flowing, if at all possible.
I do understand the reasons for and implications of keeping the threads independent. Nonetheless, the lack of shared memory leaves a heavy taste of incompleteness.
But you _can_ easily use "raw" memory; just don't use XC for that!
Berni
Respected Member
Posts: 363
Joined: Thu Dec 10, 2009 10:17 pm

Postby Berni » Wed Jun 12, 2013 11:05 pm

Treczoks wrote: I do understand the reasons for and implications of keeping the threads independent. Nonetheless, the lack of shared memory leaves a heavy taste of incompleteness.
It is just that XC is trying to be as thread-safe as it can. But this is why you can also combine regular C with it, so that you can do all the stuff that XC won't let you, or run some C code you found online for some other MCU. There should probably be a compiler directive to tell it to ignore shared-memory errors, but the shared-memory trick I described in my earlier post only needs a few lines of C code. It looks something like this (I wrote it from memory, so it might have a mistake):

Code:

char *array;

void map_array(char src_array[])
{
    array = src_array;
}

char get_array(int index)
{
    return array[index];
}
Treczoks
Active Member
Posts: 38
Joined: Thu Mar 21, 2013 11:18 am

Postby Treczoks » Thu Jun 13, 2013 10:10 am

segher wrote:What does this mean? We weren't talking about volatile, and I have no idea what you mean by "register dispatch".
Ah, OK. When a compiler translates its source, it has to keep tabs on which variables to hold in registers and which to keep in memory. If you have variables that could be altered by events outside the current thread's scope, you have to treat them as volatile - hence that keyword in C. The "register dispatch" is the part of the compiler that assigns registers (and memory regions) to (local) variables. With the local thread fenced in as tightly as XC does it, only hardware registers have to be treated as volatile, and memory can always be considered "clean" from external manipulation.
segher wrote:Channel I/O is very predictable, too, as long as you know what code is running at the other end of the channel (and you do). Channels do not magically block, they do not have a temper.
Well, I do know the code running at the other end of the channel. But I also know that the other side's behaviour depends on some IO that might happen - or not, or just not in time.
An example:
At the moment I have a bunch of threads dealing with the network - which should be the source of synchronisation for the whole system, but which might fail if the other side does not deliver, or the packet ogre happens to be hungry, or whatever. On the other side I've got the audio subsystem, which has to run no questions asked, and which demands and/or provides audio samples within a fixed timeframe. While everything is running, I get my samples within 1/48000 of a second from the net and can pass them on to the audio, but if I miss a packet, I can't just stop LRClk and BClk.
segher wrote: Memory is good for data you want to put aside for a while. Channels are good for data that you want to keep flowing. You usually want to keep things flowing, if at all possible.
Yep. Maybe I'll have to redesign some parts to make it more "flowy". If only there was good documentation on channels - there are way too many things unclear, and the documentation I've found ("Programming XC on XMOS devices", the XS1 library manual, misc. sources) does not really help once you want more than basic pipes. But that is another thread...

Thanks for your input!

Yours, Christian Treczoks
segher
XCore Expert
Posts: 843
Joined: Sun Jul 11, 2010 1:31 am

Postby segher » Thu Jun 13, 2013 2:44 pm

Treczoks wrote:The "register dispatch" is the part of the compiler that assigns registers (and memory regions) to (local) variables.
That is called the register allocator.
segher wrote:Channel I/O is very predictable, too, as long as you know what code is running at the other end of the channel (and you do). Channels do not magically block, they do not have a temper.
Well, I do know the code running at the other end of the channel. But I also know that the other sides behavior depends on some IO that might happen - or not, or just not in time.
That doesn't make the code's performance any less predictable;
it just makes it badly designed code with sucky performance ;-)
segher wrote: Memory is good for data you want to put aside for a while. Channels are good for data that you want to keep flowing. You usually want to keep things flowing, if at all possible.
Yep. Maybe I'll have to redesign some parts to make it more "flowy".
There are three common cases where you do need buffering; try to
avoid them all:

1) You need to be able to replay some data if it fails at first:
avoid this by doing all error recovery (and error checking) at the
very ends of your data flow, not ever anywhere in between.

2) You need to do some transformation on the data that requires
you to know a whole packet; for example, prepend the length of
the packet:
redesign your internal protocols. If you cannot avoid this because
of external constraints, do the buffering at the edges of your flow.

3) You really really really need to know the content of a whole packet:
see if you can use a different algorithm. This is hardest but least
frequent.
Treczoks
Active Member
Posts: 38
Joined: Thu Mar 21, 2013 11:18 am

Postby Treczoks » Fri Jun 14, 2013 9:05 am

segher wrote:
Treczoks wrote:The "register dispatch" is the part of the compiler that assigns registers (and memory regions) to (local) variables.
That is called the register allocator.
In your compiler. In none of mine ;-)
segher wrote:That doesn't make the code's performance any less predictable;
it just makes it badly designed code with sucky performance ;-)
I beg to differ. I'm responsible for my code, but not for the input I get from the outside. My code is designed to run even if the world around me f*s up ;-)
segher wrote:Three common cases where you do need buffering, try to avoid them all:
Well, there is data that can be kept in the flow, and there is data that can't. I've both kinds to deal with. E.g. some of the "data that can't" is a set of parameters sent to me within a network packet. Another process (SPI to/from another processor) needs to read from and write to some of the parameters. Yet another process needs to react to the contents of some of these parameters to determine what to do with the flowing data.
Yes, in theory this can be done with channels, but it would be a nightmare to implement and maintain.

At the end of the day you'll need a screwdriver for screw problems and a hammer for nail problems. Banishing hammers for being "old-fashioned" and forcing everyone to use screwdrivers for everything has always been a bad approach.

Looks like I have to cross the language barrier to deal with this. Not the cleanest thing to do, but no technical problem either. Just inconvenient and inelegant. :-(

Yours, Christian Treczoks
