List
-
- Respected Member
- Posts: 395
- Joined: Tue May 18, 2010 12:25 am
-
- XCore Addict
- Posts: 169
- Joined: Fri Jan 08, 2010 12:13 am
Not quite sure what you are after here... XC has standard support for arrays:

Code:
unsigned int array[10];
for (int i = 0; i < 10; i++)
    array[i] = i;

If you want anything more complex than that (your request has an odour of Java to it), then you need to create your own structure types to hold the additional information.
Paul
On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
-
- Respected Member
- Posts: 395
- Joined: Tue May 18, 2010 12:25 am
I meant List<T> in the sense of a Vector, i.e. an expandable array. For example, a List<int> would be an expandable array of ints.
-
- Experienced Member
- Posts: 99
- Joined: Mon Dec 14, 2009 1:01 pm
XC doesn't have support for generics and there's no flexible list implementation. This kind of data structure is available in C++ which you can compile for the XCore. Otherwise, you could write your own linked list implementation (for example).
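As a sketch of the "write your own" option, here is a minimal singly linked list of ints in plain C. The names and structure are illustrative only, not from any XMOS library:

```c
#include <stdlib.h>

/* Minimal singly linked list of ints; illustrative only. */
typedef struct node {
    int value;
    struct node *next;
} node_t;

/* Push a value onto the front of the list; returns the new head,
 * or the old head unchanged if allocation fails. */
static node_t *list_push(node_t *head, int value) {
    node_t *n = malloc(sizeof *n);
    if (n == NULL)
        return head;          /* out of memory: leave list as-is */
    n->value = value;
    n->next = head;
    return n;
}

/* Walk the list and count its elements. */
static size_t list_length(const node_t *head) {
    size_t len = 0;
    for (; head != NULL; head = head->next)
        len++;
    return len;
}

/* Release every node. */
static void list_free(node_t *head) {
    while (head != NULL) {
        node_t *next = head->next;
        free(head);
        head = next;
    }
}
```

Note this still uses dynamic allocation (malloc per node); the trade-offs of that on a small core are discussed further down the thread.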
-
- Respected Member
- Posts: 395
- Joined: Tue May 18, 2010 12:25 am
Any recommendation for an efficient vector class to use in C++ on the XMOS?
-
- Experienced Member
- Posts: 94
- Joined: Tue Apr 27, 2010 10:55 pm
Do you really want dynamic memory allocation and the overhead of a generic vector implementation on a core with 64k of memory?
Sure, it works. But it may well be more efficient to use a block of statically allocated memory and, for example, build a linked list inside it or use part of it as a "virtually growing" array. Of course, this depends on the application.
If you really need an STL implementation you could check http://www.sgi.com/tech/stl/ or http://sourceforge.net/projects/stlport/. But I'm sure there are smaller ones out there.
-
- Respected Member
- Posts: 395
- Joined: Tue May 18, 2010 12:25 am
Is dynamically allocating memory really that bad? The program runs through some image data, picks out feature points, and adds them to a list. I guess I could use a static array and a counter.
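That static-array-plus-counter idea could look like the sketch below in C. The feature-point fields and the capacity of 32 are made-up placeholders, since the thread doesn't say what a feature point contains:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical feature-point record; fields are placeholders. */
typedef struct {
    int x;
    int y;
} feature_point_t;

#define MAX_POINTS 32  /* worst-case capacity, fixed at compile time */

static feature_point_t points[MAX_POINTS];  /* statically allocated */
static size_t num_points = 0;               /* the counter */

/* Append a point; returns false when the array is full,
 * so the caller decides how to handle overflow. */
static bool add_point(int x, int y) {
    if (num_points >= MAX_POINTS)
        return false;
    points[num_points].x = x;
    points[num_points].y = y;
    num_points++;
    return true;
}
```

The whole worst-case footprint is visible at compile time, and appending is a couple of stores and an increment, with no allocator involved.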
-
- Experienced Member
- Posts: 94
- Joined: Tue Apr 27, 2010 10:55 pm
Not bad as such, but depending on the data it usually needs more code, and probably more data for maintaining the memory pool.
It's not a black-and-white thing; it really depends on the application. On small embedded systems, different points matter than on desktop systems.
When I statically allocate memory for 32 datasets but in most cases only 5 of them are actually used (e.g. your feature points, whatever they are), it looks like I wasted 27 of them. But could I really use that memory for other things in this situation if I used dynamic allocation?
Some applications must use dynamic allocation, e.g. if the system behaves highly dynamically: in one situation I need 120 * A and 30 * B, some time later I need 60 * B and 300 * C, and so on.
When I use dynamic allocation, how much memory do I actually need in the worst case (e.g. 32 datasets), including overhead?
Can I test the dynamic allocation? What do I do when allocation of the 32nd dataset fails at runtime, e.g. because the system has run out of memory? (With static allocation I already get an error at compile time when the array of 32 datasets doesn't fit.)
Hope this helps,
Thomas
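The runtime-failure point above can be made concrete: with dynamic allocation, every single request can fail and must be checked; with a static array, a worst case that doesn't fit is caught when the image is built, not in the field. A sketch in C, where the dataset type and the count of 32 are illustrative:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    char payload[64];   /* illustrative dataset contents */
} dataset_t;

/* Dynamic: each allocation must be checked at runtime.
 * The failure branch can trigger on the 32nd allocation just as
 * well as on the 1st; the recovery strategy is up to the caller. */
static dataset_t *alloc_dataset(void) {
    dataset_t *d = malloc(sizeof *d);
    if (d == NULL)
        fprintf(stderr, "out of memory\n");
    return d;
}

/* Static: if 32 datasets don't fit in RAM, the build fails
 * at compile/link time instead of the program failing at runtime. */
static dataset_t datasets[32];
```

On a 64k core, the static version also makes the worst-case footprint (32 * sizeof(dataset_t)) directly visible in the map file.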