Parallella on Kickstarter

Off topic discussions that do not fit into any of the above can go here. Please keep it clean and respectful.
Experienced Member
Posts: 75
Joined: Thu Feb 02, 2012 3:32 pm

Parallella on Kickstarter

Post by Gravis »

It's not XMOS, but I thought it would be interesting to see just what this chip can deliver.
It's got 32KB per core (which can be pooled at a penalty) and 700MHz cores. If nothing else, it will be an interesting chip to experiment with. There is only a day left to support it and they are short on funding.

Parallella on Kickstarter

XCore Addict
Posts: 133
Joined: Tue Dec 15, 2009 10:23 pm

Post by yzoer »

Yeah, Folknology posted this a while back here: ... parallella

It'll be interesting to see where it goes. It's a different audience / target market than XMOS, though. Similarly, have a look at (yet an entirely different beast) . 144 tiny asynchronous cores!

XCore Expert
Posts: 546
Joined: Thu Dec 10, 2009 10:41 pm
Location: St. Leonards-on-Sea, E. Sussex, UK.

Post by leon_heller »

They have reached their target! I signed up as soon as I heard of it four weeks ago, and couldn't see them getting enough subscribers a couple of weeks ago, judging from the rate of people joining. It looks like all the publicity that has been generated over the last couple of days has done the trick.
New User
Posts: 2
Joined: Mon Apr 12, 2010 11:48 pm

Post by Chuckt »

"If you think about it for a second, is not that fast. The Core i7-980 @4.5GHz get around 95 GFLOPs compared to Adapteva 16-core at 26 gigaflops. *(even PS3 can do 150GFLOPS, and it has PPC CPU)" ... ostcount=2
The problem with what Adapteva is claiming is neatly summarized by a blog post on the company’s own website. On September 7, Andreas Olofsson published a list of parallel processing efforts by different companies. According to him, “There have been some bright spots for application specific parallel processors with limited programmability, but the success rate of general purpose parallel programmable processors is an approximate 0%. I compiled the following list to stay sober regarding our own chances to succeed as a parallel processor company.” ... -processor
To my understanding, this Parallella would share the same problems as a von Neumann CPU, because its memory access is the limiting factor.

I have a design that does not have this problem, but scaling it is uneconomic.
The board space requirements and the number of traces would be mind-boggling beyond 4 CPUs. ... order=&x=1

"Not only does this chip have to deal with the von Neumann bottleneck, it also requires software support to access its memory. The problem is the manufacturer is not supplying the necessary software. He isn't even supplying the libraries so one could easily program the chip!"
Respected Member
Posts: 296
Joined: Thu Dec 10, 2009 10:33 pm

Post by Heater »


Some rather scathing comments about the Adapteva Parallella project there.

Clearly phrases like "super computer" should be taken with a pinch of salt.

The Epiphany chip has been designed with a few constraints that are not applied to parallel efforts like a multi-core Intel CPU or the average GPU: namely an emphasis on low power consumption for mobile applications, and a rather limited transistor budget, the latter being due to the process technology available. The result of these constraints is that the cores are made as small as possible, hence the need to ditch things like caches and division in hardware. Also the minimalist mesh communications set-up.

That leaves you with a lot of floating point units with fast access to a small amount of local RAM, and slower access to the RAM of other units through the mesh.

Of course the RAM bottleneck is always there. Scale this up to 1024 processors in a shared RAM system and, without caches, those processors would be endlessly waiting for each other to get RAM access. But in the Epiphany approach, as you scale up you get more RAM as well; as long as you can arrange your algorithms to have a high locality of reference, you are winning. This applies to parallel processors with caches too, like your quad-core PC, the only difference being that the local store is now managed in code rather than by cache hardware.
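That "local store managed in code" idea can be sketched in plain C. This is only an illustration, not Epiphany SDK code: the hypothetical scale_tiled routine stages tiles of a large shared array through a small local buffer (standing in for a core's 32KB on-chip SRAM; on real hardware the copies would typically be DMA transfers), so all the arithmetic hits fast local memory.

```c
#include <string.h>

#define TILE 256  /* tile sized to fit comfortably in a small local store */

/* Illustrative only: scale a large "shared memory" array by staging
 * tiles through a small "local" buffer, mimicking a core that must
 * explicitly move data into its on-chip SRAM before computing on it. */
void scale_tiled(float *shared, int n, float factor)
{
    float local[TILE];                     /* stand-in for local SRAM */
    for (int base = 0; base < n; base += TILE) {
        int len = (n - base < TILE) ? (n - base) : TILE;
        memcpy(local, shared + base, len * sizeof(float));  /* fetch tile */
        for (int i = 0; i < len; i++)
            local[i] *= factor;                             /* compute locally */
        memcpy(shared + base, local, len * sizeof(float));  /* write back */
    }
}
```

The programmer, not a cache controller, decides what lives in fast memory and when it moves, which is exactly the trade described above.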

Can we devise algorithms that make good use of the Epiphany architecture? Are there such algorithms that are useful? We shall see.

My impression is that all attempts at parallel processing have run into similar issues and ended up not being very general purpose as a result. Hence the 0% success rate referred to in Adapteva's survey of the scene.

As for the Kickstarter project, I spent a weekend reading all the available documentation, history and plans and, despite being aware of its limitations, pitched in 120 dollars. Why?

1) No matter how well or badly the Epiphany works out, it will be an interesting toy to play with. I have no immediate serious applications coming to mind just yet. I have only recently become aware of OpenMP for creating parallel programs on multi-core processors, so my interest was piqued.

2) Looking at the proposed board, I concluded that it was worth $99 even if you never used the Epiphany chip on it. It has a Zynq-7010 dual-core ARM A9 CPU with 1GB of RAM, along with all the goodies you might expect on such an ARM board. Compare that to the IGEP from ISEE at $180, the Raspberry Pi at $40, or many others.

BUT the Zynq chip contains 28K programmable logic cells. That sells the board to me immediately as the cheapest, easiest way to explore the Zynq chip.

3) Finally I liked the story of Andreas Olofsson and Adapteva, if only half of that is true they deserve a little support. Yeah, I'm a sucker:)
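On the OpenMP approach mentioned in point 1: it lets an ordinary C loop run across all the cores of a multi-core PC with a single pragma. A minimal sketch (compile with -fopenmp on GCC/Clang; without it the pragma is simply ignored and the code runs serially, which is part of OpenMP's appeal):

```c
#include <omp.h>

/* Sum an array in parallel: OpenMP splits the loop iterations
 * across threads, and the reduction clause safely combines each
 * thread's partial total into one result. */
double parallel_sum(const double *v, int n)
{
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}
```

Contrast that with the Epiphany, where there is no shared coherent memory for the threads to lean on, so the data movement would have to be spelled out explicitly.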
XCore Legend
Posts: 1274
Joined: Thu Dec 10, 2009 10:20 pm

Post by Folknology »

What he said -> @Heater ;-)

That's why I pitched $120

Respected Member
Posts: 296
Joined: Thu Dec 10, 2009 10:33 pm

Post by Heater »


4) The chip exists in its 16-core version and has been used in board designs by another company. So it's kind of proven, and that "just" leaves the Zynq board design. Looking at the proposal, that seems to amount to taking a reference design from Xilinx and removing all the stuff we don't need. Altogether not an unreasonable proposition.

5) Andreas Olofsson has big plans: the 64-core version of Epiphany, and even more cores in future. But here is the biggie: getting some ARM SoC builder to incorporate the Epiphany architecture into the ARM SoC itself, and getting that out into phones, tablets etc. It's a long shot but we like big plans :)