Github vs XMOS

Technical questions regarding the XTC tools and programming with XMOS.
rp181
Respected Member
Posts: 395
Joined: Tue May 18, 2010 12:25 am

Github vs XMOS

Post by rp181 »

Is it better to use the software components (such as UART) from the XMOS site or github?

Also, the github page says up to 115.2k baud. I know I earlier managed to push the XMOS one up to 3 million baud with minimal data loss (negligible in my application). Is the same possible with the github one?


lilltroll
XCore Expert
Posts: 956
Joined: Fri Dec 11, 2009 3:53 am
Location: Sweden, Eskilstuna

Post by lilltroll »

I thought exactly the same about the GITHUB implementation last night, since I use 2-3 Mbit as well, with the help of the port timers.

The solution: what if we push a "fast UART" example to the repo, so everyone can share it?

Whichever of our solutions uses the fewest instructions for the job wins ;)

Just add your thoughts here
https://github.com/xcore/sc_uart/issues/2
This link may be deleted in the future
Probably not the most confused programmer anymore on the XCORE forum.
rp181
Respected Member
Posts: 395
Joined: Tue May 18, 2010 12:25 am

Post by rp181 »

http://www.xmos.com/applications/basic-io/uart?ver=all

This is the one with which I achieved that speed (I believe), and it hasn't been updated in a while. If someone put it up on github, I am sure people would eventually optimize it. I'll look at it, but I don't think I am in a position to write such a program. :)

EDIT: So the one linked is very simple. The only improvement I can see is using a clock block instead of the manual timing. The problem I had at high speed was that every so often a byte would be missing. What was weird was that a byte went missing at a regular interval (every x bytes), which made me believe it wasn't this code but my host.
lilltroll
XCore Expert
Posts: 956
Joined: Fri Dec 11, 2009 3:53 am
Location: Sweden, Eskilstuna

Post by lilltroll »

rp181 wrote:http://www.xmos.com/applications/basic-io/uart?ver=all

This is the one with which I achieved that speed (I believe), and it hasn't been updated in a while. If someone put it up on github, I am sure people would eventually optimize it. I'll look at it, but I don't think I am in a position to write such a program. :)

EDIT: So the one linked is very simple. The only improvement I can see is using a clock block instead of the manual timing. The problem I had at high speed was that every so often a byte would be missing. What was weird was that a byte went missing at a regular interval (every x bytes), which made me believe it wasn't this code but my host.
You must ensure that the sampling of the signal happens at the correct position and does not drift away.

I guess you are using the FTDI chip; it uses a 6 MHz clock.
An XMOS port can create exactly 2 MHz when running at 400 MHz (100 MHz/50), or 3 MHz when running at 384 MHz (96 MHz/32). It works bit-perfect in those two cases.

With clockdiv() it becomes this easy, without spending any instructions on delays.

Code: Select all

p_TXD <: 0;                  // start bit
byte = (int32, char[])[j];   // place the code for fetching the byte to send here
#pragma loop unroll
for (int k = 0; k < 8; k++)
    p_TXD <: >> byte;
p_TXD <: 1;                  // stop bit
Probably not the most confused programmer anymore on the XCORE forum.
rp181
Respected Member
Posts: 395
Joined: Tue May 18, 2010 12:25 am

Post by rp181 »

Yes, I am using the FT232. I am still waiting on some parts, so I can't test anything quite yet. How do you know it is "bit perfect"? I am using a 500 MHz L1, so how would I go about figuring out "perfect" frequencies?

Also, this:

Code: Select all

for(int k=0;k<8;k++)
         p_TXD <: >> byte;
is interesting; I didn't know you could do that to spit out the bits. That should help out quite a bit soon!
lilltroll
XCore Expert
Posts: 956
Joined: Fri Dec 11, 2009 3:53 am
Location: Sweden, Eskilstuna

Post by lilltroll »

Yes, they included a special operation in the ISA for bit-banging: shift and output in the same op.

You could always apply a CRC32 to a chunk of data and check it on the host.
"100%" is an unreal claim.
I send my data to MATLAB, so I would see if it were corrupted. Of course I haven't sent an infinite amount of data, so I have no proof, but it makes sense.
If you try to read 10 bits of data with more than 10% frequency difference, you will end up on the wrong bit by the end of the byte.

I do not remember, but doesn't the port clock run at 100 MHz on a 500 MHz device as well?
phalt
Respected Member
Posts: 298
Joined: Thu May 12, 2011 11:14 am

Post by phalt »

The github components are updated far more often and I think the road map will be to eventually have all our software components on github.
dan
Experienced Member
Posts: 102
Joined: Mon Feb 22, 2010 2:30 pm

Post by dan »

I'm afraid the github uart is a little confusing. It's not implemented for maximum speed and efficiency! It's actually implemented to be more easily composable into threads that are also doing other things, since typically a uart is not going to use a whole thread in a design destined for high-volume production.

This is confusing, and we're in the process of overhauling all the low-speed serial components we offer on github and xmos.com. High-speed uarts will (probably) be part of that mix, as will a quad-uart in one thread. We're also looking at how we can make components that offer combinations of serial buses (e.g. uart+I2C+SPI) in one thread, which is also quite tricky in practice if they are all running asynchronously.
ozel
Active Member
Posts: 45
Joined: Wed Sep 08, 2010 10:16 am

Post by ozel »

The xlog module can run at 921600 baud, and with it printf() can be directed to an arbitrary 1-bit port, but sometimes I have weird problems with this... Unfortunately I use G4s, where there is no XScope support yet.

Anyway, I'd be very interested to see what a generic 2 Mbit (or faster) UART RX code would look like that is perfectly synchronized to every incoming start bit. With timerafter() it looks like it drifts too much (which results in missed bytes, just as you all mention). I'm not sure, but I guess oversampling on the RX pin could be the only valid option, as explained here http://github.xcore.com/doc_tips_and_tr ... y-sampling. I don't understand it completely, though.

rp181 and lilltroll, what does your modified UART RX code look like?
lilltroll
XCore Expert
Posts: 956
Joined: Fri Dec 11, 2009 3:53 am
Location: Sweden, Eskilstuna

Post by lilltroll »

ozel wrote: The xlog module can run at 921600 baud, and with it printf() can be directed to an arbitrary 1-bit port, but sometimes I have weird problems with this... Unfortunately I use G4s, where there is no XScope support yet.

Anyway, I'd be very interested to see what a generic 2 Mbit (or faster) UART RX code would look like that is perfectly synchronized to every incoming start bit. With timerafter() it looks like it drifts too much (which results in missed bytes, just as you all mention). I'm not sure, but I guess oversampling on the RX pin could be the only valid option, as explained here http://github.xcore.com/doc_tips_and_tr ... y-sampling. I don't understand it completely, though.

rp181 and lilltroll, what does your modified UART RX code look like?
I use the TX in sc_adaptive_filter; download it and see. It sends an array of int32, i.e. 4*n bytes.