Hi Jason,
Just a couple of ideas for the XE website:
1. Add link to XMOSLinkers archive.
2. Add link to David May's presentations and documents.
As a newb, it took me quite a while to find these valuable resources.
Regards, Roger...
XE Website suggestions
-
- Active Member
- Posts: 55
- Joined: Fri Oct 15, 2010 12:14 am
-
- XCore Expert
- Posts: 577
- Joined: Tue Sep 08, 2009 5:15 pm
By "David May's presentations and documents", to what are you referring? There are quite a few videos and such (usually I post these, along with other interesting videos, under the inspirational videos section).
Maybe we should start a wiki page for "useful resources, documents, and videos" or suchlike. If you make it, I shall promote it to the front of the wiki and add a few links to key posts, e.g. make a sticky topic on getting started to help people new to XMOS. This way it can be maintained by everyone, as and when they find something really cool worth sharing that made them go "ah!"
:-)
-
- Active Member
- Posts: 55
- Joined: Fri Oct 15, 2010 12:14 am
Hi Jason,
David has a bunch of good stuff here: http://www.cs.bris.ac.uk/~dave/
The wiki for links to good stuff sounds like an excellent idea!
Cheers, Roger...
-
- Experienced Member
- Posts: 126
- Joined: Fri Feb 12, 2010 10:31 pm
I often feel like the wiki is underused; perhaps it would help to have a panel on the front page showing the most recently updated entry or entries? That might get people interested in codifying some of the things that have been discussed in the forums.
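The "most recently updated entries" panel could be fed by MediaWiki's standard `action=query&list=recentchanges` API rather than by touching the database directly. Below is a minimal Python sketch of that idea; the `api.php` endpoint URL for the xcore.com wiki is an assumption, and the sample response is canned to show the shape the API returns.

```python
# Sketch: fetch recently-updated wiki pages via MediaWiki's standard
# recentchanges query API (action=query, list=recentchanges).
from urllib.parse import urlencode

API_URL = "http://www.xcore.com/wiki/api.php"  # hypothetical endpoint path

def recent_changes_url(limit=5):
    """Build the query URL for the last `limit` wiki edits."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|timestamp",
        "rclimit": limit,
        "format": "json",
    }
    return API_URL + "?" + urlencode(params)

def panel_entries(response_json):
    """Turn a recentchanges API response into (title, timestamp) pairs."""
    changes = response_json["query"]["recentchanges"]
    return [(c["title"], c["timestamp"]) for c in changes]

# Canned response matching the API's JSON shape (page title is made up):
sample = {"query": {"recentchanges": [
    {"title": "Programming ports in XC", "timestamp": "2010-11-01T12:00:00Z"},
]}}
print(panel_entries(sample))
```

A Drupal block on the front page could call `recent_changes_url()` periodically and render the resulting pairs as a list of links.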
-
- XCore Expert
- Posts: 577
- Joined: Tue Sep 08, 2009 5:15 pm
Good suggestion, bsmithyman. I shall look further into how MediaWiki works (DB structure etc.) and what the best way to go about this is, given that the main site is Drupal (a different system).
-
- Active Member
- Posts: 55
- Joined: Fri Oct 15, 2010 12:14 am
The wiki has some good information, but I wish somebody would finish the "Programming ports in XC" page. I am struggling to find good examples of port programming.
-
- XCore Addict
- Posts: 146
- Joined: Thu Dec 10, 2009 10:17 pm
Good idea, bsmithyman
Jason, if the DB interface is a pain, it might be easier to do an HTTP query to Google using
" site:http://www.xcore.com/wiki "
This finds all pages spidered under that URL.
Scrape and sort the results by date, store them in a file, and run it hourly via cron.
Just a thought. You may find a script out there already.
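The "scrape, sort by date, store in a file" step above can be sketched in a few lines of Python. Fetching and parsing the Google results is deliberately left out here (that markup changes often, as noted later in the thread); the result list and file name below are hypothetical.

```python
# Sketch: sort scraped (date, url) results newest-first and store them
# in a file. A cron job would rerun this hourly with fresh results.
from datetime import date

def store_sorted(results, path):
    """Write (date, url) pairs to `path`, newest first, one per line."""
    newest_first = sorted(results, key=lambda r: r[0], reverse=True)
    with open(path, "w") as f:
        for d, url in newest_first:
            f.write(f"{d.isoformat()}  {url}\n")
    return newest_first

# Hypothetical parsed search results:
results = [
    (date(2010, 10, 1), "http://www.xcore.com/wiki/index.php/Main_Page"),
    (date(2010, 11, 2), "http://www.xcore.com/wiki/index.php/XC_ports"),
]
store_sorted(results, "wiki_pages.txt")
```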
--r.
-
- XCore Expert
- Posts: 577
- Joined: Tue Sep 08, 2009 5:15 pm
I think the DB would be far more efficient in this case; I just need to take some time to sit down and look at how all the tables link together.
The problem with scraping Google (whilst a creative solution) is the delay before it updates with new wiki content (and relying on a third party to keep the result format the same for parsing, etc.).
In the time it would take to write that, I could just write a Drupal module which interfaces with the MediaWiki DB, which will be much more useful in the long run too. Thanks for the suggestion though :-)
EDIT: I am also loving how MediaWiki stores all of its content in the DB as varbinary! It is like being in the Matrix, looking at all the data. Fun times.
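The "Matrix" effect comes from MediaWiki keeping page content in the `text` table's `old_text` column as a binary blob, so a raw DB browser shows bytes rather than readable wikitext. A module reading that table just needs to decode the bytes; a minimal Python sketch (the blob value below is made up):

```python
# Sketch: decoding a MediaWiki old_text blob (stored as binary in the
# text table) back into readable wikitext. The row value is hypothetical.
raw = b"== Programming ports in XC ==\nSome wiki text here."

def decode_old_text(blob):
    """Decode a MediaWiki old_text blob into a readable string."""
    return blob.decode("utf-8")

print(decode_old_text(raw))
```

The same decode step would apply to rows fetched from the DB by a Drupal module before rendering them.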
-
- XCore Addict
- Posts: 146
- Joined: Thu Dec 10, 2009 10:17 pm
Well, I take your points, but I'm not sure there's that much delay these days. Google's pretty aggressive at sucking up fast-moving sites. And scraping is often easier than dealing with some of the database designs I see.
Either way, if you can get it done, it will help grow the wiki. That's the main thing.
--r.