
Friday, 13 September 2013

Day 64: The beginning

All good things must continue forever... or something like that.

Today I did some documentation: I implemented the IMAP and mbox tutorials inside the wiki itself, which required uploading screenshots. It turned out file uploading hadn't been enabled when the wiki was set up, but that was soon rectified via Steve and it's now all done.

The wiki looks good.

Also troubleshot some printing issues with John Latham. And we noticed that when logging into your Unix account, or logging in on My Manchester, it doesn't matter whether you use capitals in your password - it makes absolutely no difference. Very strange, and somewhat worrying too; it makes one wonder whether they are sending our passwords over the network as plain text... I've clearly spent too long with John.

My, what a summer this has been! From Monday I'm back to being just a regular student, one of gazillions coming to the Uni for Welcome Week. This is going to be a fun year, I can feel it!

I wrote a 300-word summary of my experiences of this vacation job, apparently to go on the school website. Here it is; I think it aptly describes how I have felt about the past 13 weeks:
Frankly, it's been a whole lot of fun! This experience has been the polar opposite of dull. There is new stuff coming up all the time that keeps every day unique, with events and jobs of a huge variety to do alongside the core task, which makes for a colourful, interesting, and exciting work experience. This place is massive; so much goes on it boggles the mind. I've done programming, documenting, porting, debugging, testing, blogging, cabling PCs, troubleshooting PCs, buying PC components, building PCs, upgrading PCs, swapping components about, formatting PCs, reinstalling OSes, managing email clients, IMAP, brushing up all manner of skills (.NET programming, networking, embedded development, Python, shell scripting, Visual C#, XNA, OpenGL to name a few), manning demos at events, attending meetings to discuss and decide immensely important stuff, playing with the very best 3rd year projects of all time, moving 42" televisions and all manner of super high end computer hardware about, photography, modelling for photography, barbecuing that I didn't actually attend, and general tomfoolery when the occasion called for it. I've honestly enjoyed every moment of my time working this summer - one of the best ever.

I would recommend it if you want a fun summer whilst earning. You get to interact with your lecturers, professors, and the people behind the scenes on a daily basis. It's a great opportunity to get to know the important academics, especially if you are planning further study at the University later on (masters, PhD).

In addition, all the work you do has an impact. Often in industrial internships the work you do is barely helpful; you aren't valued for the work you put in but for other reasons. As a vacation student, by contrast, the whole reason you are here is first and foremost to complete a task that the department needs done (plus a million other subtasks). You are important and the work you do is important - nothing is contrived or just to keep you busy, which really helps with motivation and enjoyment.

In short, it has been a tremendously positive experience.

I would like to thank Toby Howard for being a fantastic supervisor - I can't fault him in any way. He and his team (but mostly him), and everyone I've interacted with over the course of this work (including but not limited to Andy Wise, JTL, Graham Gough, Steve Pettifer, Tony Curran, of course Toby, Ruth Maddock, the hardware guy walking about, Gavin Brown, Rina Srabonian and a whole bunch more), have made this summer a real success in so many ways. Words fail me.

Goodbye.

Day 63: Wiki

Finished off the tutorials for the wiki; they are linked into it and everything is set up.

Did a load of documentation.

Alerted Tony to the fact that there was a mistake in the MAC logging for a few machines, which would potentially have resulted in about six of the computers in LF31 freezing intermittently and being generally horrible to use. All fixed now though.

Did some other spontaneous tasks; all looking good - everything is coming together nicely.

Wednesday, 11 September 2013

Day 62: Three days to go

Three days to go. I wanted to finish LowryDemo today and managed to get a good chunk of it done, but it's now 1am and there's a lot left.

Documentation is one of those things where you need to curb your perfectionist tendencies and not overload it with information. The most effective documentation is short and simple, so that's the direction in which I'm trying to mould my documenting habits.

It's going rather well but taking a little longer than I expected :)

Fixed a few things I noticed in ExportJPEGs and LowryDemo as I documented them, polished off some loose ends and made the complicated bits of the code tidier.

Also spent some time moving bits of code to where they really belong. Categorising the different areas of code really highlights the places where I've put code that doesn't belong, so I'm quite enjoying perfecting the programs.

Till tomorrow!

Day 61: And into LowryDemo we go

Started LowryDemo documentation. It's difficult to get your head round such a massive program and then try to work out how to even structure the documentation, but it seems to be taking shape slowly.

Finally got the IMAP working fully, and updated the guides, tutorials and the wiki. Published the Google Doc tutorials for mbox importing and for setting up Thunderbird IMAP, then linked them into the wiki. Also made PDFs of them and sent those to Graham so he can consider putting them onto the wiki in place of the Google Docs, since the Google versions are obviously hosted on Google's servers, which is less than ideal.

It turned out that the way Thunderbird works, you need to enter only the server name in the IMAP and SMTP settings boxes, and then let it auto-detect the rest. It actually uses different ports and protocols from what Outlook was using.

In the email field you need to enter the user ID that can be found in Outlook; mine was firstname.lastname@student.manchester.ac.uk.

In the username field you enter your Campus ID, e.g. mbxxxxxx, and in the password field your Campus password.

All done, now to concentrate on documentation! Aiming to finish it by tomorrow.

Monday, 9 September 2013

Day 60: Blogging!

Forgot to do my blog posts lately - working from home does throw some simple habits off kilter as it were!

Continuing documentation, aim to finish ExportJPEGs today at the very least and make some headway into LowryDemo.

.. And that's what I did. ExportJPEGs documentation finally finished! Only about 10,000 words. Time to sleep, bye :)

Day 59: Working from home

.. and did a bunch of documentation!

5000 words and I still haven't finished ExportJPEGs. I think it's a bit too detailed, so I may start summarising from now on to finish everything in a day or three. Only a week left till University starts again!

So far I've been adding a lot of detail, with the intention that the user can read the doc, understand basically everything that went on in my head when coding the programs, and barely need to look at other sources to understand how it all works. But this is slowing the documentation down, so from now on I'll do the normal thing: refer the user to other documentation when necessary, and just give the basic framework of how and why things do what they do. This leaves a lot more scope for the user to teach themselves.

Did a few other menial tasks messing about with Office 365 and email and whatnot. All good fun!

Day 58: Completion of cabling

Cabling all completed.

Did a few hours of it yesterday and finished half the room. Came in today and finished the rest; it looks quite pristine now - hard work coming to fruition and all that.

Three of the desks' cable tidy hole slot covers cannot be found, so three desks are looking a bit messy.

One of the desks didn't have its computer set up yet, so I haven't done that one, but apart from that all is done.

Logged onto the Linux machines in the lab and sorted out the Thunderbird tutorials. Office 365 mail verification is still broken so I couldn't finish the IMAP guide, but I added some info to the wiki:
http://wiki.manchester.ac.uk/compsci/index.php/It.changes


Wednesday, 4 September 2013

Day 57: Only a week and a half to go!

Time management

Time is short, so I've been increasing efficiency - I want this documentation finished as soon as possible. I've been missing some hours at work over the past week or so (working 6, 5, or even 4 hour days - I've been busy with my sister's wedding, which is now finally over), so I have a few extra hours of overtime to make up. I therefore plan to work a lot longer than usual this week, and probably into the weekend.

Annoyingly I also need to divide up my time for my MEng report, but it will be manageable.

I've also decided to work at home for most of the next few days, only coming in to get the cabling done in LF31. I can afford to do this now because documentation doesn't really require much rebuilding and testing of the AR, so my home PC has everything I need. This will save a lot of time on travelling, eating, and all sorts of other time-eating activities that I normally do, which is great.

I will still need to be in the School to test and run some things though (get screenshots, etc.), at least until I fully install the AR onto my home machine and find myself a webcam. But I'll be sure to list everything I need to do beforehand and stop by my office to get it all done efficiently in one sitting when I'm already in Kilburn for the cabling.

Scratch that - change of plan

Just talked to Toby; we decided I'll get the cabling over and done with and then work from home thereafter.

Spent the afternoon cabling and finished half of the whole room. All I can say is that, for about half the room, whoever did the cabling there is now my sworn enemy. I have never seen so many shoddy mistakes: network cables just thrown over the desk instead of going down into the tidy hole and out the other one; the USB extension that users will need to swap often stuffed down the tidy hole and sealed there with a cable tie in completely the wrong place, leaving it completely inaccessible; a cable tie on every single cable around the tidy hole, which is frankly horrible; and general unpredictability and inconsistency throughout.

Sorted it all out now. The other half of the room seems to be much more sensibly cabled (I presume different people did each half), and now I'm looking forward to finishing everything tomorrow morning.

Out of interest, it takes me 3 minutes a desk on average - provided the cabling isn't too bad and it's just replacing tie wraps and sorting out cable positioning, i.e. when I didn't have to lie under the desk feeding horrendously placed cables through tiny holes to their proper positions or fixing other horrendous cabling mistakes. I cut out the cable wrapping plastic beforehand and put a piece of it, along with one cable tie, onto each desk. Also cleaned up all the leftover cut cable tie bits into a bag - both from my own cutting and the load strewn over random desks by the previous cabler. Hopefully this will save someone having to go through everything afterwards (well, to be honest someone probably still will, but at least it will save them some time).

I realised after about 3 rows that I was using too little of the cable wrapping plastic - it didn't really reach end to end. I increased the length for the rest of the desks, which look much better on the left side of the room. I'll probably replace the ones that are too short; there seems to be a lot of the stuff left over.

Day 56: Cabling and Documenting, day one.

The documentation continues, and Pi cabling is underway!

The cabling is pretty simple:
  1. Ensure the cable ties are correctly done, and correct them if not.
  2. Apply the plastic stretchy translucent cable tidier things to the two cables going from the back of the monitor (USB and power or something) to the cable hole in the desk.
  3. Apply the cable hole cover thing, and slot the two blue cables (network and USB) out of the far slots, the power and HDMI out of the front slots, and the two aforementioned monitor cables out of the right slot.

The four cables apart from the two monitor ones (i.e. USB, HDMI, network and power) go through the cable ties.

All set! Only about 80 to do, got through about 5 today before turning in, will continue in earnest tomorrow.

Continued documentation - I have a pretty solid structure in place now. I divided each program into functions, so for ExportJPEGs that would be something like:
  1. GUI Handling: creating, displaying and managing the GUI and its functions
  2. Server Polling: monitoring the server for new models, and copying them to the incoming directory
  3. Directory Monitoring: monitoring the incoming directory for new models, then loading them for processing
  4. Model Processing
  5. Exporting GIFs: exporting JPEGs, converting them to a gif, and uploading the gif to the web gallery.
Then there is a section for each one, which first describes the aim of the function, then lists and explains in bullet points the global variables used, and finally lists and explains in bullet points every method containing code relevant to the function.

All looking good, I've pretty much done Server Polling and Directory Monitoring.

Tuesday, 3 September 2013

Day 55: The return of the Howards.

The Documentation

Continued documenting the mass of excellence that is ExportJPEGs and LowryDemo: improved the intro a massive amount, started off a basic structure for the general code explanations, and got stuck right into it.

Toby is back! Some cheerful reunions ensued, at one point I'm sure I heard guitar music of some sort coming from his room in the afternoon :)

And I'm now convinced that Toby is an excellent photographer. These aren't just images that are pleasant to the eye; after browsing his new Japan August 2013 photo album I realised that each and every one of those captures works with the others to portray the real essence of Japan. I now feel as if that beautiful country is a long lost home of which I have only the fondest memories, with the images tentatively prodding and stirring those feelings into an inner desire to experience it for myself... now, if Toby worked at a travel agency, my flight would be booked.

Tomorrow I may be helping out with some Pi cabling in LF31 - a nice change: some manual labour would be a welcome respite from the mental stresses and stressful mentals of documentation.

Ahem, till then.

Friday, 30 August 2013

Day 54: Friday

Tuned up the tutorial on importing mbox files into Thunderbird (https://docs.google.com/document/d/1voSf8PuXz7jo-vVYOxHayYHU5lzqZCRysqrN-a46MfA/edit?usp=sharing), adding Windows- and Linux-specific sections.

Continued documenting the AR system.

Finished overview of ExportJPEGs, the hard part starts now - detailed explanations :)

Thursday, 29 August 2013

Day 53: Doc

Documentation

Finished off working on LowryDemo and moved on to the documentation. Did some final tweaks to the code to get everything looking a bit more professional.

A future improvement came to mind as soon as I started the documentation: it would be a great idea to allow users to tweak certain selected global variables of the program via a config file. Such a system is already in place with a config.txt file that modifies marker information and the like, but I plan to update it to use XML, which is far more flexible. I hope to eventually extend the XML config file to allow users to add new objects automatically by simply adding the relevant information to a central XML file.
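As a quick Python sketch of the idea (the element and attribute names here are entirely hypothetical - the real config.txt is a different format, and the actual program is C#):

```python
# Hypothetical sketch of an XML-based config; the tag and attribute
# names are invented for illustration, not the real config.txt schema.
import xml.etree.ElementTree as ET

sample = """<config>
  <marker id="86" model="Dude" sound="boots.wav"/>
  <marker id="87" model="Fire" sound="crackle.wav"/>
</config>"""

# Build a lookup of marker id -> its settings
markers = {
    int(m.get("id")): {k: v for k, v in m.attrib.items() if k != "id"}
    for m in ET.fromstring(sample).findall("marker")
}
print(markers[86]["model"])  # -> Dude
```

The nice part is that adding a new object would just mean adding another element to the file, with no code changes.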

However for now, to save time (time really is flying - only two weeks or so of work left!), I shall simply list this as a future development plan in the documentation.

Started it off, stored it in my Google Drive: https://docs.google.com/file/d/0B-L96HHcA2JTQ2J2SDJTaUV0clE/edit?usp=sharing
(anyone with the link can view and comment on the documentation)

Email and stuff

Learnt that IMAP on Outlook is currently down, which is why I couldn't log in previously. Hopefully it'll be fixed soon, and then I can finish off Steve's jobs.

Wednesday, 28 August 2013

Day 52: IMAP and Outlook

Microsoft, the future of email

So the mbox work was successful, but unfortunately before the tutorial can be completed we need to actually set Thunderbird up to use the new Outlook 365 mail system. Steve did mention documenting this as another of my tasks, but I completely forgot - so I set about doing it today. It proved impossible.

Now, so far I have been logging into the new Outlook mail system directly via the student portal. This involved logging into the portal and clicking the "EMAIL" button, which took me straight to my Outlook 365 inbox. There was no actual login screen for Outlook itself.

So that's all well and good, but when I tried to access the Outlook web app directly and log in, no combination of username and password would work. The point of the change, as Steve explained, is that everything will use a single username and password - the central University one, i.e. the 8 character ID every member of the University has, starting "mba_____" or something similar, and its corresponding password.

Well, the Outlook web app login page does not like firstname.lastname@student.manchester.ac.uk, nor does it like 8_digit_central_ID@student.manchester.ac.uk. There is currently no way to access that mail account other than logging into the Student Portal (i.e. MyManchester) and going via there.

The problem with this is that I do not have a username or password to give Thunderbird so it can fetch my Outlook mail. From within the Outlook 365 web app you can view the IMAP settings (server hostname, port, SSL etc.), and it also tells me that my ID is muhammad.mahmud@student.manchester.ac.uk. Within Thunderbird I have set up the IMAP and SMTP settings correctly (it tells me all is well, and even reported that it found port 23 on the SMTP to be a good 'un, so that seems to be working), but when it attempts to verify the username and password it simply fails.

So in a nutshell, I don't know my Outlook 365 username and password!

For one thing, I understand that the University does not want students to be able to change their Outlook password from within Outlook 365 itself, because then it wouldn't sync with their University password, which would cause all sorts of problems. But I need some way of knowing my Outlook login credentials so that I can bypass the University Student Portal and get my mail forwarded to Thunderbird.

Another interesting occurrence: the student portal now no longer logs me into my Outlook account, because I've entered the password incorrectly too many times. It now comes up with the captcha screen (input these characters to prove you aren't a robot), where it also requires me to type my password again. But I don't know my password, because the Student Portal was handling all that automatically! So there is no way for me to get into that account until Outlook resets itself. (Actually I can just use another browser, but the point remains!)

AR
Fixed a few bugs: sound now correctly re-enables itself when switching back to a model set from another model set whilst the marker card stays in view throughout. I think the sound system is pretty much done! The only thing left now is to find and add more sounds. Working on the bouncing golf balls; I intend to program them so that the bounce sound plays whenever any ball bounces. Should be realistic and fun!

Day 51: Not a Monday

Admin

First things first: yesterday being a bank holiday has thrown my precisely tuned day numbering scheme utterly off balance. As I did not work yesterday, it can hardly count as Day 51. So to clarify, today (Day 51) is a Tuesday, and Friday will now land on Day 54 (rather than 51 being a Monday and 55 a Friday, as would usually occur).

IMAP, mbox files, Thunderbird

Worked out how to get Thunderbird to read mbox files (one of my Steve Pettifer missions) without being connected to any IMAP account. Documenting the process in a Google Doc: How to import old mbox mail.
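(As an aside: mbox is just a plain-text format, so the files can be inspected without Thunderbird at all - for example with Python's standard mailbox module. The filename below is a placeholder, not an actual file from the job.)

```python
# Quick sketch: list the subject lines in an mbox file without Thunderbird.
# "old-mail.mbox" is a placeholder path.
import mailbox

def list_subjects(path):
    """Return the Subject header of every message in an mbox file."""
    return [msg["Subject"] for msg in mailbox.mbox(path)]

# e.g. list_subjects("old-mail.mbox")
```

mailbox.mbox handles the "From " line framing, so each message behaves like a normal email.message.Message.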

AR

Finished coding a first draft of the sound system and it works! The dude walking, ship through fire, clock, buggy and heads all have lovely sound effects playing when the cards are visible. After fine tuning the code a bit more, the effects now successfully disappear when the cards are removed so all looking good.

Tuesday, 27 August 2013

Day 50: Resounding progress

The sounds I have need to be split into the background sounds, and the sound effects. Both have different characteristics and logically should be implemented in different ways.

Background music and such can be imported directly as MP3 or WMA, and then played, paused, stopped etc.

Sound effects should be handled by creating an XACT project from WAVs, and then importing the XACT project into the AR program. 3D positioning and attenuation can be applied to the effects as well as looping counts (up to infinite) and they are instantiated as Cue objects and then played at the appropriate times.

I've decided to classify the sounds into the two groups based on usage: if a sound needs 3D effects (including attenuation), or is intended to be played as a one-off (i.e. played once and forgotten), then it's an effect; otherwise it's background music.
Background Music:
guitar music
breathe through gasmask
(various insect sounds)
crackle fire
flamethrower
lear jet engine panning
ticking clock
swamp buggy revving
spooky wind

Effects:
golf ball in sand
gravel crunching
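The rule itself is simple enough to state as code (a toy Python sketch for illustration only - this isn't part of the actual C# program):

```python
# Toy sketch of the classification rule above; not actual program code.
def classify(needs_3d: bool, fire_and_forget: bool) -> str:
    """A sound needing 3D positioning/attenuation, or played once and
    forgotten, is an effect; everything else is background music."""
    return "effect" if needs_3d or fire_and_forget else "background"

print(classify(needs_3d=True, fire_and_forget=False))   # -> effect
print(classify(needs_3d=False, fire_and_forget=False))  # -> background
```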

Finished the code for the sound effects played while the heads are being shown. It now plays eerie background sounds whenever the current model set is a head set, and correctly stops all other sound effects regardless of which marker cards are showing (e.g. boots walking, fire sounds etc. from other models), starting them again when we switch the current model set back to a non-head one.

This was made efficient by putting the code for enabling or disabling a (non-head) model's sound effect into one method, which takes a parameter specifying whether the sound should be enabled or disabled, declared thusly:
private void processFoundModelAudio(int foundModelIndex, bool Enable)

And something like this deactivates all current sounds with a bang:
for (int i = 0; i < MarkerNodes.Length; i++)
    processFoundModelAudio(i, false);

perform_bang();

(Um the bang bit is a joke)
Okay all looking good. Adding some simple code to play correct effects for the non-head models now.

The email address guinea pig thing is underway: my account is done, I'm now on the Office 365 system, and I have some lovely tasks involving documenting how to set up email on Linux and Windows machines, complete with screenshots, videos if necessary, and all the general stuff. Should be fun.

Friday, 23 August 2013

Day 49: A sound call

The happenings @ Kilburn

I realised that the plan described yesterday (for managing sound) isn't the best solution, because in reality we only want to do audio processing when either (a) something comes into view after not being in it, or (b) something goes out of view after being visible. Otherwise no changes are made: any audio that's playing just keeps playing, and vice versa. Yesterday's method actually calls the audio processing for a marker on any frame in which that marker is seen; it doesn't know whether a marker that was in view in the previous frame has since moved out (the (b) case above).

So here's a better system:
            // Check if any model has just appeared or has just disappeared.
            for (int i = 0; i < MarkerNodes.Length; i++)
                if (MarkerNodes[i].Enabled && (
                        // Marker has just left the view
                        (WasFound[i] && !MarkerNodes[i].MarkerFound)
                        // Marker has just come into the view
                        || (!WasFound[i] && MarkerNodes[i].MarkerFound)))
                {
                    // If it's just come into view then enable sound, otherwise disable sound
                    processFoundModelAudio(i, MarkerNodes[i].MarkerFound); // second parameter specifies enable or disable
                    WasFound[i] = MarkerNodes[i].MarkerFound;   // update for next time
                }

And then we have the switch statements in the processFoundModelAudio method to determine exactly which model it is. Lovely! Continued coding this stuff into the AR demo.

Georgiana finished off the Pi's - all 278 of them - today. It turned out we are short about 48 power supplies; we've seemingly lost track of a box or two! The rest of the Pi's are all packed and ready to go.

Also noted that we were horribly short on card readers - we only had about 80 of them. So Steve decided that, since the school lab PCs will all come with card readers anyway, we won't give the students card readers with their Pi's. Eventually I had to go over to the AIG lab and remove all the existing card readers from the bags... a gruelling display of manual labour indeed.

Also managed to lock myself out of my CS account. Again. The password wasn't working! Something similar happened about a month ago, when I was uploading pics for Rina, needed my CS credentials, and noticed I couldn't log in. Whatever the problem was that time, Tony Curran walked me through resetting my password and all was well. He also mentioned (rightly so) that I really should have gone through the ARS system rather than just turning up at his door with an email from Rina as my only announcement (he really is a busy chap, what with managing the IT of the whole building/school/whatever) - so this time I smugly submitted an ARS request... and then completely missed Tony, who came looking for me while I was out for a late lunch! His office was locked afterwards, so it looks like this will have to wait till tomorrow.

Pi's lined up in battle formation. Katy doesn't stand a chance.


Wednesday, 21 August 2013

Day 48: New environments

Relocation

Shifted Donald over to my original office next to Toby's, because I frankly half killed myself the other day working in the AIG lab, where the temperature must have been around 18 degrees. The lab being cooler than the rest of the building is great on a hot summer's day, but with the temperature dropping off lately it was too much to handle.

Walter's tower is still in the lab; his screen is with Georgiana to connect to the master Pi so she can run the Pi labelling (which is going very well indeed - almost finished, I believe! All 250 Pi's), and so is the ethernet cable. Which means I am back down to a single PC - a humbling prospect, but when times are harsh we must make do with what we have.

The card readers ran out, so the Pi labelling is being inconveniently held up; I've asked Steve to investigate.

AR

Made a little list of the models assigned to each MarkerNode. The first model is the one shown in ModelSet1, the second in ModelSet2 (i.e. when 1 or 2 is pressed on the keyboard):
             * 0 Sofa, Ant
             * 1 Marvin, Police
             * 2 Gingerbread, Diskette
             * 3 Dude, Bird
             * 4 Mosi, Turbosonic
             * 5 Dragon, Creeper
             * 6 Osaka, Buggy
             * 7 Fire, Golf
             * 8 Guitar, Basketball
             * 9 Clock, Globe

Using this information, and setting a good value for MaxDropouts, I can determine exactly when each model is showing using the isFound method and checking the active Model Set. Then it's simply a matter of playing the right sound at the right time.

I plan to use a for loop to determine whether models with sound are up or not. Then pass them to a method which uses case statements to do the rest:

Loop through all models; if one is found, call the method processFoundModelAudio:

processFoundModelAudio takes an integer ModelFoundID.
If model set 1 is active, perform case statement 1; if model set 2 is active, perform case statement 2; otherwise the heads are visible, so process them.
Case statements 1 and 2 switch on ModelFoundID, with cases 0-9 containing the processing info for each model.

To play sound cues in xact project use: http://rbwhitaker.wikidot.com/3d-audio-effects-location
To play background music: http://rbwhitaker.wikidot.com/playing-background-music
Attenuation effects: http://rbwhitaker.wikidot.com/3d-audio-effects-attenuation

Sounds great!

Day 47: Marks, get set...

GO!

I passed the Pi's and their script off to Georgiana, who proceeded to rampage through the first 15 of them in 45 minutes (3 minutes a Pi is about right - it takes about that long for one to boot up and have its MAC registered).

So today we just wanted to time it and get a feel for how many a single person can get through in how long, so that we know whether I need to join in the labelling before the end of the week (or whether we can just leave Georgiana to toil alone).

Email testing

I'm also the proud guinea pig (and canary) of the new email infrastructure. Basically, Tony from IT Services is going to switch my address over to the new system as a test case; I'll report a few things to him and then make a tutorial on how to set up the email system on Windows, Linux, and even my Android smartphone, just because I can. Said tutorial will include lots of screenshots.

QandA to Wiki

In previous years the CS QandA site has not been very popular, even though it is actually a great place to communicate with staff and other students collaboratively and solve common problems. So this year it has been decided that there will be a collaborative staff and student wiki, in the hope that this tech will garner increased popularity and have more success than the original QA site. Another of my tasks is to go through the QA site and check for any substantive or useful questions that could be used to seed the new wiki. Steve Pettifer asked me to spend an hour copying any good discussions into a Google Doc so that he can decide whether porting such questions onto the wiki is actually a good idea.

Here's the doc:
https://docs.google.com/document/d/14Kf5__sG9qurGWexaakkGVYIuToDacSBLencf09j6TE/edit?usp=sharing

AR

Carried on some more general code fixes and mass commenting!

Monday, 19 August 2013

Day 46: Bugs

Donald fixed, the future of debugging

Got ethernet on Donald working again, then went on a wild goose chase trying to find the right drivers for an annoying unmarked WiFi card. Finally found them; WiFi works, all good to go.

Back to AR. Upon every model acquisition there was some code that read in the marker nodes referenced by the config.txt file and assigned them to the model. Previously this code had been repeated at the start of every demo's add method, which caused a lot of redundancy. I moved all the code into one place - a single loop over an array that did everything in a choice few non-repeated lines, all during the marker tracking initialisation method. Unfortunately, since doing this, none of the models show up on the markers when the demo is run. So today I set about fixing it.

This is a great example of how easy debugging is when you have two machines that can run the same code. I had the older working code debugging on Walter and the new broken code debugging on Donald alongside it. That let me compare variables directly, observe what differences in execution my code changes made, and various other things. I'm somewhat surprised that debuggers don't offer this functionality of stepping through two code versions simultaneously (or maybe they do and I just haven't found it). It was immensely useful to me!

So first, with the old working code on Walter, I checked out in debug mode the complete MarkerNode's that were being added to the scene graph after the code completed. I then checked the same variables on Donald, debugging the non-working code, and compared them for differences.

And the answer was immediately clear: my new code was leaving two variables null: MarkerNode.markerConfigs and MarkerNode.MarkerID

Whereas with the working version these variables were set as follows:
markerConfigs = { [object 2] }
MarkerID = "AlvarConfig6.txt"

Or if the marker node was a single marker and not a marker array then:
markerConfigs = { [object 1] }
MarkerID = 86

And after some step-through action, I discovered that in the new code my scene.MarkerTracker was null whilst the markers were being assigned to it. Oops. A quick rearrangement of code, and all now works perfectly.

Rearranged and organised markers in a sensible order, adding them to the arrays in a natural manner.

Fixed some bugs: for some reason the ship-flying-through-fire demo wasn't showing. I realised I had a firePlacer that was being disabled at odd times, not too sure why. Removed the code and now it works.


Continuing coding sound. Going well.

Day 45: Ssh vs read-only, finished maclog.py, back to AR

The final touches: safe shut down

So there were two possible solutions: either use ssh, or make the SD card read-only. You can ssh into the Pi from Python using Paramiko (http://jessenoller.com/2009/02/05/ssh-programming-with-paramiko-completely-different/) and call "sudo halt" to shut it down, but instead I chose to go the other way and modify the Raspbian distribution on the Pi, forcing it to mount as read-only. I had to make changes to a variety of files, as described here: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=29&t=22596
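For reference, a sketch of the kind of change the thread describes: mount the SD card's filesystems read-only and push anything that must stay writable into RAM. The exact entries vary by Raspbian version, so treat these lines as assumptions rather than a recipe (the linked thread has the full list of files to change):

```
# /etc/fstab (sketch) - add the "ro" option to the SD card mounts
/dev/mmcblk0p1  /boot     vfat   defaults,ro  0  2
/dev/mmcblk0p2  /         ext4   defaults,ro  0  1
# things that must stay writable go to RAM instead
tmpfs           /var/log  tmpfs  defaults     0  0
tmpfs           /tmp      tmpfs  defaults     0  0
```

With the root filesystem read-only, yanking a slave Pi's power can't corrupt the OS, which is why this beat the ssh-shutdown option for our use case.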

Testing

I tested it in a variety of situations to hopefully ensure my script is robust enough to withstand all manner of unexpected abuse. Added lots of text printing to walk the user through every single step, and error catching to deal with annoying issues. Added a KeyboardInterrupt catch so that a Ctrl-C exits the program cleanly: it closes the log file and shuts down properly.

I created master and slave SD card images (with the Unix dd command) as the most foolproof backup I could think of. Each image is just under 500MB.

All finished version 1 description:

There is a Master Pi with the script and log file on it. Connect it to a screen and keyboard and run the script. Follow the onscreen instructions to work out how to use it; very self-explanatory.
The year is passed as a command line argument.
The script looks for the log file at startup, and asks to create one if not found.
If it is found, it starts the sequence at the correct number for this year.

Any new MAC address discovered that is not already in the log file (regardless of year) will be added to the end of the file on a new line, along with its sequence number, as such:
b8:27:eb:6a:ca:84,13/0002
(MAC,yy/nnnn)
This will also be printed to the screen so the user can write the sequence on the Pi as required.

If the new MAC has already been logged (regardless of year) then an error message prints the existing sequence, and the script continues (waiting for the next MAC) without re-logging.

It takes a pretty long time after a new Pi has been connected for it to be recognized by the script. In all situations meaningful messages walk the user through what is happening.
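The sequence-number and duplicate-check behaviour above can be sketched roughly like this (a minimal stand-in for the real script; the helper names are my own):

```python
# Sketch of the logging logic described above (helper names are hypothetical).
# Log lines have the form MAC,yy/nnnn e.g. b8:27:eb:6a:ca:84,13/0002

def parse_log(lines):
    """Return {mac: sequence} from existing log lines."""
    entries = {}
    for line in lines:
        line = line.strip()
        if line:
            mac, seq = line.split(",")
            entries[mac] = seq
    return entries

def next_sequence(entries, year):
    """Next free number for this year, continuing from any previous run."""
    prefix = "%02d/" % (year % 100)
    used = [int(seq.split("/")[1]) for seq in entries.values()
            if seq.startswith(prefix)]
    return prefix + "%04d" % (max(used) + 1 if used else 1)

def log_mac(entries, mac, year):
    """Return the line to append, or None if this MAC is already logged."""
    if mac in entries:  # regardless of year: never re-log
        return None
    seq = next_sequence(entries, year)
    entries[mac] = seq
    return "%s,%s" % (mac, seq)
```

So with an existing 13/0002 entry, the next new MAC in 2013 gets 13/0003, a repeat MAC returns None, and a new year restarts the count at nnnn = 0001.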

There is a specially set up SD card with IP address 192.168.0.2 which will mount its OS as read-only. This SD card will be put into each of the ‘Pi’s to be labelled’ one by one as they are connected to the Master Pi with a direct ethernet connection.

When connecting a new slave Pi, always connect its power cable last, for obvious reasons. When disconnecting, any order works (a socket error occurs if you disconnect the network cable before the power cable for some reason, but my awesome script catches that, deals with it safely and continues).

Pi labs

Also read through the bunch of other labs at Graham's request, just to ensure everything is up to scratch. They are all located here: http://studentnet.cs.manchester.ac.uk/ugt/2013/COMP10120/labscripts/

AR

And back to AR! Donald no longer connects to the internet, ever since I started messing about with a bridge connection on the Ubuntu VM that I installed (or it could be some other reason, I'm not sure). In any case the ethernet connection no longer works, and I don't know which drivers will make his WiFi card work... it doesn't even have a brand on it! I'd better figure this out, or at the next event, when Toby tries to use Donald in some random location, there will be an unpleasant surprise in store as he discovers the WiFi doesn't exist...

Next Monday, I believe, the Pi's will be passed to Georgiana, who will go through the actual labelling process, run the script, and so on.

Friday, 16 August 2013

Day 44: I once was a Python programmer..

Python

When they said Python was easy to learn, they weren't joking! I haven't actually programmed very much in Python before, so I was learning quite a bit as I progressed through writing the script. It is in fact a very quick language to program in, pretty easy to use, and with enough power to do wondrous things. A useful tool in my programmer's arsenal.

So today was spent writing basically the whole script, encompassing all of the intended functionality apart from the last issue: ensuring the slave Pi's can be disconnected without corrupting the OS. This can be done either by making the SD non-writable or by logging into the Pi and shutting it down via ssh. It is not enough to simply flip the read-only switch on the SD card; the Pi will still mount itself with RW access. I'll fix this first thing tomorrow!

Also read through the Pi lab at http://studentnet.cs.manchester.ac.uk/ugt/2013/COMP10120/labscripts/rpi1.pdf and suggested a few improvements to Graham. Mainly I thought there was a bit too much of a "wall-of-text" effect and not enough enforced structure. That makes the instructions somewhat difficult to follow, as they are embedded as sentences within paragraphs, and makes it easy to lose your way as you flip between the document and your Pi. An improvement would be to enforce some structure with bullet points or numbered instructions, to make them easy to follow and provide anchors in all the text so you can find your place easily.

Thursday, 15 August 2013

Day 43: ARP'ing about

Pi

Got the Pi problem on the move with Steve Pettifer's help. First we tried to arp-scan a Pi from an Ubuntu VM on a Windows host, but the idea of trying to navigate the virtual network adapters to route the VM to the Pi was simply daunting and somewhat unnecessary.

So I used the Mac conveniently located behind me in the AIG lab instead; Steve had previously successfully arp'ed a Pi from his Mac, so we thought we knew what we were doing. However, even following Steve's tried and tested method, the arp-scan still failed. Apparently the OS on this Mac treats internet connection sharing differently to the one on Steve's Mac.

So then I connected two Pi's together, one a master and the other a slave (i.e. it represents all the other Pi's that are going to be connected one at a time). Installed arp-scan on the master, tried it, but still no luck.

Finally we got David in to help out, and soon discovered that getting arp-scan to work meant we had to fully configure IP addresses for the master and slave Pi's. Luckily we are using the same SD card for all the slaves, so we won't need to reconfigure anything each time. There isn't any DHCP device set up, so it is simpler to just statically assign IP addresses. This was achieved by modifying the /etc/network/interfaces file, replacing the line (on the master Pi):
iface eth0 inet dhcp

with:
iface eth0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    gateway 192.168.0.2

and similarly on the slave Pi (though of course the address and gateway will be different).

Now, finally with a working arp, I set about coding the Python. Installed scapy (libraries for arp'ing and other useful network-related stuff) and its dependencies as per the website instructions.

In the Python script I noticed you need to first import scapy.all and only then call from scapy.all import ...; for some reason it doesn't work if you try to import the required bits directly.
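As a rough sketch of what the detection ends up looking like (the helper name and subnet are my assumptions, this requires scapy to be installed, and it uses the module-import workaround just mentioned):

```python
def find_slave_mac(subnet="192.168.0.0/24", timeout=5):
    """Broadcast an ARP request on the direct ethernet link and return
    the first MAC address that answers, or None. Hypothetical helper."""
    import scapy.all  # importing the names directly without this failed for me
    from scapy.all import srp, Ether, ARP
    answered, _unanswered = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),
        timeout=timeout, verbose=False)
    for _sent, received in answered:
        return received.hwsrc  # MAC of the replying Pi
    return None
```

The main script would just call this in a loop, comparing each result against the log file.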

Decided on this design specification for the function:

It will check for any unique new Pi being connected to the Master Pi via direct ethernet (the Master and the new Pi must be using specially modified SD cards that assign them the correct IP addresses), and whenever it finds one it will log its MAC address and a newly generated sequence number to the log file. They will be on the same line, separated by a comma, e.g.
b7:56:d3:ff:32:11,13/0001

It will print useful info to the user as it runs, e.g. MAC and sequence for any new Pi, or a message saying that the Pi currently connected has already been logged. In addition it will support being stopped and restarted but still remembering which sequence number it was up to. In additional addition it will require a command line argument which is the current year, so that it can output the correct sequence number.

Sounds fun.

Wednesday, 14 August 2013

Day 42: Raspberry Pi MAC addresses and stuff

Currently stuck on LowryDemo. Moved all the repeated code that assigns models to marker nodes into one for loop in the setupMarkerTracking method instead. To be honest that's where the code belongs, and it eliminates a lot of repeated code. However, after doing this, no models show up on markers when the code runs.

After further debugging, it seems no markers are being tagged as found, even when I hold them up to the camera. Will study the new code more closely and see if I can work out what the issue is.

Pi labelling

Graham contacted me with a little job involving uniquely identifying and labelling the new Raspberry Pi's with sequence numbers before they are distributed to the first year students. Basically I need to write a script, running on a standalone non-networked laptop, that monitors for Pi's being connected; when one is found, it logs the MAC address with the appropriate sequence number.

This sequence number will then be written on the Pi's case with a permanent marker pen.

So step one, find out how computers are being used to connect to Pi's and what writing a script to do that would involve!

Monday, 12 August 2013

Day 41: Back in business

Back

The holiday was good! Arrived back in Kilburn to carry on the AR work, and leapt promptly back into the coding of sound.

Found out how to check markers to see if they are in the view or not, and also found out how to set a buffer number of frames. If the marker has been seen within the last x frames, then if it is lost for whatever reason (the marker moves out of the view, or it blurs so the camera doesn't capture it for a few frames, etc) it will still remain found until the buffer expires or the marker is seen again.

Useful so we don't have choppy audio. Am coding it in now.
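The buffering idea can be illustrated like this (a Python stand-in; the real code is C# inside the Goblin XNA demo, and the class and names here are made up):

```python
class MarkerBuffer:
    """Treat a marker as 'found' until it has been unseen
    for more than buffer_frames frames."""

    def __init__(self, buffer_frames=15):
        self.buffer_frames = buffer_frames
        self.last_seen = None  # frame number the marker was last detected

    def update(self, frame, detected):
        if detected:
            self.last_seen = frame

    def is_found(self, frame):
        # Still counts as found while inside the buffer window, so a few
        # dropped frames (blur, brief occlusion) don't cut the audio.
        return (self.last_seen is not None
                and frame - self.last_seen <= self.buffer_frames)
```

Each marker card gets one of these, updated every frame; the sound only stops once is_found goes false.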

Tasks to focus on in the near future

Apart from the implementing sound that I mentioned above, there are a fair few loose ends to tie up in order to get the AR up to scratch:

  1. Documenting entire new system
  2. Test AR installation on a new system - Hoops, to be precise. Ensure portability of AR.
  3. Add handling to models that don't load, user can see error message, attempt reloads, etc
  4. Make a system for people to easily add models? GUI?
  5. more that I can't remember right now
So there is quite a bit of stuff to be working on.

This was Toby's last day before flying out to Japan for his holiday, so there was a tearful goodbye. He'll be back in a few weeks though, and in the meantime I've been asked to help out with general programming tasks for the school with Graham Gough, John Latham and Steve Pettifer, so looking forward to some new stuff too.

Friday, 2 August 2013

Day 40: Sound.

ExportJPEGs bug

Fixed a bug in ExportJPEGs where the FileSystemWatcher would call the "fbx found" event handler multiple times for the same fbx models in the incoming folder. Realised it's because of the regex replacement I do on every fbx found; this triggers the FileSystemWatcher again, since I've set it to check for changes in files.

Unfortunately, the reason I cannot simply stop the Watcher from detecting file changes is that I am using a side effect of file changes to detect files. I have a touch method, for when we want to load existing models in the monitored directory, which changes the creation time of every file - thereby forcing the files to be noticed by the Watcher.

You need to do this to load existing files on startup because there is no way to get the Watcher to see files that are already there.

So now I've edited my touch method: instead of resetting the creation time on every file and having the Watcher detect a file change, it renames every file in the folder and has the Watcher detect renames. A complicated bit of regex renames any filename to start with a full stop; if the file already starts with a full stop, it replaces the full stop with a comma. This way the user can also tell which files were loaded because they pre-existed, as all those file names will start with a comma or a full stop.
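The renaming rule itself boils down to this (a Python stand-in for the C# regex; the function name is my own):

```python
def touch_rename(filename):
    """Rename rule used to poke the FileSystemWatcher: prefix a full stop,
    or swap an existing leading full stop for a comma."""
    if filename.startswith("."):
        return "," + filename[1:]
    return "." + filename
```

So a freshly loaded head.fbx becomes .head.fbx, and a second touch turns it into ,head.fbx - either prefix marks it as a pre-existing file.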

VMs and AR

Following on from yesterday, when I was working out how to move the AR to a fresh machine so that it can run the demos successfully.

Okay, worked out what the issue was on the Virtual Machine and why it wouldn't load the project files: apparently you need to install XNA Game Studio in order to load .contentproj files. Installed the redistributable framework package and then tried to run ExportJPEGs.exe, and it went rather well.

Got up to the point where it told me it couldn't find a Direct3D device. It seems that VirtualBox's virtual graphics adapter isn't good enough to run our AR programs (which are tagged as HiDef Direct3D apps). I don't see this working in a VM any further than this, so the next step will be testing it on a real machine. Katy, Donald and Walter are all fully prepped and installed for Visual Studio XNA development, so I can't use them, which leaves Hoops - currently in use by Georgiana. So in that case I think I will use Walter or Donald after all: just uninstall everything related to AR off one of them and continue from there.

In any case, the purpose of this exercise was to determine the minimum software installation required to run the AR applications, so it is not entirely essential. I already know that we can definitely get the AR to work by moving about 1GB of source code and models and installing a bunch of dev packages on the target machine. And from the testing up to this point (i.e. up until the Virtual Machine failed me), it seems the AR could work with a bare minimum of the following:

  1. 100MB binary files and compiled models etc, all in one folder
  2. .Net 4.5 runtime installer
  3. XNA 4.0 runtime
  4. VC redistributable package
If that is indeed all we need then we're in pretty good shape, it's pretty portable stuff.

Sync errors

Tested ExportJPEGs today, and noticed that some models were failing to build for some reason. After further inspection I found that if I uploaded a whole bunch of files at once to the web server, they downloaded somewhat slowly back into the incoming directory. This meant that in some rare situations the model would be present in the incoming folder but not its texture, so the build would fail because the texture can't be found.

To fix it, I added a test to check for the texture, and if it's not there, wait 60 seconds. If it's still not there, print an error message, abandon loading that model and carry on with the program. The GUI tells the user which model is being loaded, displays a live countdown when a file cannot be found, and shows the path of the texture it's looking for and which model it belongs to as the count occurs, so a user who knows what they are doing can attempt to fix it.
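The texture check behaves roughly like this polling loop (a Python sketch of the C# logic; names and the poll interval are assumptions):

```python
import os
import time

def wait_for_texture(path, timeout=60, poll=1.0):
    """Return True as soon as the texture file appears, or False after
    timeout seconds - at which point the model is abandoned and the
    program carries on."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll)  # the GUI shows a live countdown during this wait
    return os.path.exists(path)
```

This papers over the slow-download race: the texture usually turns up within the window, and the fatal case degrades to one skipped model instead of a crash.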

A further improvement could be to add models that could not load to a section on the GUI, and allow the user to click on them, to attempt a reload on them, and to view error messages and such about why they could not load. I will implement this later on as time permits.

To be honest this whole thing wasn't exactly a simple task and required some devastatingly interesting regex usage, for example to extract the name of the texture from inside the model file itself:
String texPath = Regex.Match(File.ReadAllText(e.FullPath), @"[rR]elative[fF]ile[nN]ame: ""(.+)""").Groups[1].Value;
And then I appended that to the model path to get the texture path in full... and other interesting nuances :)

But enough of that, moving on! On to a concept that has long awaited its time to shine. And well, now that time has finally arrived. Yes, I'm talking about..


Sound

Found some very nice sound samples online for various effects, and set about collecting them.

The latest version of XNA can handle mp3, wma, and wav files directly, and has some very useful tools for 3D sound; plus allows full control over playback (looping, pausing, stopping, playing, continuing, etc).

I ran into a roadblock very soon when trying to implement it though: Goblin XNA does not provide any way of telling the user when a marker card has been found! The issue is that I only want to play certain sounds when certain demos (i.e. certain markers) are in the view, and those sounds should stop when those demos are removed from the view. The only way to do this is to manually check every single marker card in the Update() function and see if it's visible! And add my own code to implement a buffer time for when a marker card is considered in view and when it's considered out of the view, because the camera does lose track of markers for miniscule (and longer) periods of time when the cards are moved or when the marker is briefly obscured by light or something. We don't want stuttering audio starting and stopping at every movement of the card!

Trying to get around this now, but the current code is looking uglier by the second! Am studying the Goblin source code to determine if there are any obscure methods I can use to make this more intuitive.

Aaaaaaaand I'm taking the next week off on holiday (surprise!) for religious observances (Eid and Ramadan) plus general relaxation, so I'll be getting back to this stuff a week on Monday all being well. Planning to get started on my MEng report too after the time off so more stuff to look forward to (cough).

And in other news, Toby just confirmed my 3-week extension for this vacation placement, so I get to spend even longer partaking in furious coding fun! Great news; got to admit this vacation has been super productive, I've really enjoyed myself and learnt an absolute ton.

Day 39: Documentation, Installing AR on a new machine

AR

Finished off the instructions for the 123D Machine operator.

Also installed a new Virtual Machine on Donald in order to test what software is required to install the AR on a new machine. We may want to run LowryDemo or ExportJPEGs on a new machine (other than Donald/Walter/Katy) in future, and therefore would need to know exactly what is required to get them to run. Ideally you wouldn't want to install the full Visual C# Express 2010 developer package!

All you need to run the programs on a PC with everything (i.e. all the Visual Studio and XNA dev stuff) installed is the bin folder, the data folder (not models or textures though, just shaders and fonts), and the dlls folder.

I copied these to the new clean virtual machine, and started testing.

First run of ExportJPEGs: force close, requires .net 4.5
Installed .net 4.5

Second run: force close no meaningful error
Installed Visual Studio 64bit redistributable package

Third run: force close no meaningful error
Installed VS 2010 C# express, and Service Pack 1. Intend to open the project in VS and run a debug to see why it isn't working, then we can determine which runtimes we need.

Can't open project with VS 2010, apparently the contentproj file is not supported by this version of VS.
Now I'm running through hundreds of Windows Updates to update VS to the latest version.

Sound

Toby refreshed the idea of incorporating more sound into the AR - because at the moment it's a bit lifeless. I was looking into it previously, but it got overshadowed first by porting the code to XNA 4.0, and then after that by Animation13, and then straight after that I jumped into automation and working on sound always seemed to get pushed aside. But no longer!

So, some interactive sound would definitely liven up the program. For example: boots on gravel when the man walks about, crackling fire and a spaceship engine sound for the ship flying through the loop demo, horrible creepy insect sounds when the ant shows up. Strange eerie music when we get the heads up?

The new sound features in the new Goblin 4.1 should make this process far more fun than it was in the old version.

Wednesday, 31 July 2013

Day 38: OpenGL bugs, new PSU, 123d-AR plan, documentation

Plan of the 123D-AR demo

How it's all going to work! Used OneNote to splash out a basic draft of how everything works and sent it to Toby for review. It's a little muddled but hopefully makes enough sense.

123D

Saw this 123D capture from Animation13, which interestingly managed to grab the television screen.
Interesting. Anyway..

I found you can actually upload multiple 123D captures simultaneously, just run multiple instances of the program and they all work independently. This will save a lot of time and allow us to get through many more 123D -> AR capture processes at events.

You just need to remember to make sure you are signed in on all of the 123D programs running simultaneously. If you aren't signed in you can't export models.

OpenGL 3.3

Helped Georgiana fix a bug with OpenGL 3 today; she's been porting the older OpenGL 1 lab code to the newer version to bring it up to date. Previously, the code to draw primitives (like a circle) would take points sequentially, so you could have a loop that generated the next point on every iteration and passed it to the function. The new version of OpenGL, however, takes an array of complete points at once (and, as per standard C, you tell it how many points from the array you wish it to draw).

Passing points into the new functions one at a time from a loop was the root of the problem, and it was soon fixed. Now we get correctly drawn circles and primitives! The change means we need an array to hold all the points before they are passed to the drawing function, which requires an extra variable in memory.

Donald's new PSU

... is installed and running very nicely indeed. Astoundingly quiet and reliable device, a BeQuiet! L8 600W. Should outlast the rest of the system by a good few years I hope.

Documentation

Started making the user manual for the AR. There will be a sheet of instructions for the 123D machine operator, and another for the AR operator, including FAQs, general guidelines and whatnot.

Will also make documentation for how to reinstall AR and 123D on new machines.

Tuesday, 30 July 2013

Day 37: Polishing ExportJPEGs

Added lots of exception handling to ExportJPEGs. Now it will rarely crash from file and directory I/O issues, but rather print a hopefully useful error message (see below) and continue, provided the problem isn't fatal.

Added a bunch of methods for handling the red label text and using it to show errors and messages when the program is running:
setLabel(String text)
setPersistentLabel(String text, int time) <- time is how long the label persists
setPersistentLabel(String text) <- time defaults to 5 seconds if not present
renderLabel()

This affects the large red text at the top of the screen, which prints useful messages and prompts when necessary. Via these methods you can set the text to anything you like; normally, once set, the label stays that way until something else changes it. Now you can also set a persistent label, which will not allow anything to change it until a certain time has passed (except another persistent label call). This is useful for e.g. error messages that you want to make absolutely sure stay on the screen for a number of seconds.

Also, the renderLabel method now splits long text over multiple lines before rendering it. Previously, if the label was too long it simply ran off the edge of the screen; now you can actually see the whole message.
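The persistence rule can be sketched like so (a Python illustration of the C# methods; the injectable clock parameter is mine, added for testability):

```python
import time

class Label:
    """Mimics setLabel/setPersistentLabel: a persistent label blocks
    ordinary updates until its time is up."""

    def __init__(self, clock=time.time):
        self.clock = clock
        self.text = ""
        self.locked_until = 0

    def set_label(self, text):
        # Ignored while a persistent label is still holding the screen.
        if self.clock() >= self.locked_until:
            self.text = text

    def set_persistent_label(self, text, seconds=5):  # 5s default, as in the real code
        self.text = text
        self.locked_until = self.clock() + seconds
```

Ordinary set_label calls bounce off an active persistent label, so an error message is guaranteed its few seconds on screen.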

Also added functionality to load all existing models at startup. If the incoming folder has any models in it when the program starts, the program will prompt the user whether to load the existing models or ignore them. It's all done quite nicely and intuitively in the GUI. So now if there are a bunch of heads we want to show up from the get-go, or if the program shuts down and has to restart, you can safely get all the heads back as long as you have the original models. On that train of thought, I also set ExportJPEGs to clean out the built-heads folder on startup, so heads built before the program started won't get loaded up again. This prevents LowryDemo from auto-loading old heads and from wasting storage space. Hmm, actually I think I can improve this.. Okay, we'll see how this goes.

Now making a separate background thread to check for files on the server every so often, this way when downloading files it won't freeze the GUI.

Also set the upload gif part of the program to always use unique gif names. In this way no gifs can possibly be overwritten.

The way the server file sync will work is that the push script on the 123D machine keeps an exact representation of the server folder. Anything not in the push (outgoing) directory will be deleted off the server. In this way the 123D people have full control over the models on the server and can replace them. If a model is replaced, the AR/ExportJPEGs program will notice and build it again (it won't replace its old model on the list though; once a model is built, it's there for good). At the end of the day, if they want to clean up the server folder, they just empty the outgoing directory and wait for the script to update (which will probably be set to run once a minute).

Importantly, the 123D people MUST NOT EDIT OR RENAME TEXTURES AFTER THEY HAVE BEEN EXPORTED from 123D. Just export and send them as is, if any texture is renamed after being exported then it won't work. You can rename models though but I strongly suggest just exporting and sending as is. It's best the 123D people export directly into the outgoing folder.


Anyway, all these details plus anything important will be packed into a little user manual shortly, for the future operators of the wondrous 123D-AR demo.

Monday, 29 July 2013

Day 36: Exceptions, LowryDemo, etc

No more Cygwin

I learnt some fantastic stuff today: I can remove the need for Cygwin from the AR completely and make it a fully native, portable Windows executable program. Well, you may need to install some Visual C++ redistributables, and I'm not sure if it will work without XNA installed... but it's pretty standalone now! I have a few required .exe's that need to go around with the program, but that's basically it.

First off a big thank you to the University of Manchester for making certain file transfer protocols unusable over wifi. If it wasn't for the fact that lftp (and ftp and rsync and everything else I painstakingly tried and tested) didn't connect over wifi I would never have even looked for alternatives and learnt all the wonderful gadgetry I learnt today.

So first off, I learnt that WinSCP not only has a synchronize feature to rival lftp mirror, it also comes as a .NET assembly and a portable executable - and it works over the University wifi to boot.

Now, the fact that I can use WinSCP from inside the ExportJPEGs program (remember, the .NET assembly is available) means that I don't need Cygwin's scp or ssh or lftp or ftp or any of that gubbins. Which means the only remaining reason I had Cygwin was to use the ported ImageMagick convert tool. After a little research I found that ImageMagick also offers a portable version that works on Windows, which comes with a little convert.exe that, after testing, works exactly as intended. Which means I suddenly have no use for Cygwin at all.

Well, one thing's for sure: I've completely given up on making this program cross-platform. That's probably a good thing, because I don't fancy working out how to manually port all of Goblin's code to Linux and Mac (XNA has been ported to Mac via MonoGame and MonoDevelop, but no one has bothered with the Goblin wrappers). So from here on out, the University of Manchester AR and 123D wonder is a completely Windows-based offering.

In other news, I managed to destroy LowryDemo because I messed up using the git repository and ended up with ridiculous "<<<<<<<< HEAD" lines in random files that prevented the whole thing from loading or building. It didn't even tell me which files had been victimised, so the only solution was to revert to my backed-up copy - luckily I have a few. Then I had to redo all the content optimizations I had made: moving textures and models to a central folder, setting relative pathnames for most of the models (converting them to binary first), etc.

First automated draft complete

That's right. I finished implementing the automated loader code in LowryDemo. Now it automatically loads up every single precompiled head in the heads directory, then waits for new ones. As ExportJPEGs compiles new heads, it automatically loads and adds them to the latest free model set.

It's all working!

At one point I got totally lost because WinSCP doesn't copy files the natural way. In fact it creates a placeholder file, then renames it once the copying is complete. While this is a great idea, it meant that my file watcher (which watches for newly created .fbx files) completely missed the new files until I told it to watch for renamed .fbx files as well. A simple problem, but it had me lost for an hour or two in despair, thinking all was broken.

Tomorrow I improve it, add exception handling, etcetera.

Friday, 26 July 2013

Day 35: Donald's PSU

Doing what I do best, troubleshooting faulty PC's

Donald wouldn't start up today. Power comes through, everything starts whirring, but nothing comes on screen. At first I thought it was the fact that I hadn't used all 9 motherboard standoffs when installing the motherboard, and now some parts were shorting out. So I removed the mobo, stole some standoffs from an old Pentium IV PC nearby (I'm sure no one is going to use it...) and reinstalled the motherboard with standoffs all in place. But it still didn't work.

I checked the RAM, checked the drives, checked the graphics card, and noticed that the red CPU LED was lighting up and staying lit. Normally it would go off once the mobo gets past the CPU test. These little LEDs are a wonderful feature of this Asus M5A97 motherboard, letting the user instantly know where the board is failing as the PC boots up. There are LEDs for CPU, RAM, VGA (graphics card), and boot device.

Anyhow, the CPU was either broken or not getting enough power. I checked the power cables to the CPU and to the motherboard, but it still didn't work.

I then removed the entire motherboard and stripped off absolutely everything except the RAM, the CPU, and two power cables from the PSU: the 8-pin CPU power and the 24-pin ATX motherboard power. Didn't work, same problem with the CPU LED lighting up.

I then opened Walter and used the 8-pin and 24-pin cables from his PSU, and boom, it all worked!

So we ordered a new BeQuiet! Purepower L8 600W (only one of the best, most reliable PSUs you can buy) from Scan, which should be here soon.

Getting AR up on Walter

Moved the development to Walter in the meantime, luckily all the code was uploaded to our BitBucket version control system so I had the latest code ready to go.

Installing the AR demo on a new PC involves installing all the component software - Visual Studio, XNA, Alvar, OpenCV, etc. - plus Cygwin (for the shell-scripting side of it).

Inside Cygwin, I installed openssh, imagemagick (both packages), and lftp.

Aaaaaaand it looks like lftp doesn't work over WiFi. Oooooh deary.

Welp, there goes the fantastic lftp mirror command I liked so much!

Finished off the Automated GUI and it works great! Only thing left now is to get Lowry Demo to automatically load new heads, should be a cinch! Oh and also to find a replacement for lftp mirror. It really is a shame that it doesn't work over WiFi, because it is just so ideal for what we want.

And I also need to add exception handling for when a build fails. (VERY IMPORTANT HAMZA DON'T FORGET)

LaRue

You guessed it - a new PC! It's an old Dell T3400 workstation called Atlas50 or something, which for some reason has 8GB of RAM in it (!). It has a decent dual-core CPU and an 80GB hard drive, so it isn't ancient. It's a real pain trying to get drivers installed on it though: the darn thing doesn't recognise USB devices (because the chipset drivers haven't been installed) and I can't connect to the internet to download them (because the Ethernet driver hasn't been installed). Gads. Well, it shall have to be done on Monday then.

Till Monday!

Thursday, 25 July 2013

Day 34: Walter arrives, MEng report, Vacation Extension and a dual-list GUI

Walter arrives

I now have two PCs on my desk! I do like having two PCs on my desk.

Reinstalled Walter with the on-campus Windows 7; he's now activated correctly. Finished installing everything up to date and got the AR demo with version control all sorted out and set up. There are massive amounts of backups on Walter and on Donald, with a full set taking up 300 or so GB on the 1TB external hard drive.

ExportJPEGs and models

So it looks like having a scrollable list of heads and allowing the user to select between them is going to be a little difficult (i.e. I have no clue how to finish the implementation of ScrollPane and it would waste too much time to try).

So as an alternative I could have one list with space for 10 head names, and then allow the user to press a button to go to the next set of 10 names, and the next, and so on and so forth. Two buttons in fact, to go back and forward between sets of heads. Then the user can export them as they see fit.

Implemented it, ended up having two adjacent lists on the GUI because that means I can fit 20 heads on one screen instead of just 10 which is great.

Some fairly involved programming was needed, so I used a bit of good old abstraction to make it all easy on the brain. Deciding when the forward and back buttons should be enabled and disabled, working out when to initialize a new list for each of the two visible lists, and determining which model to load when all I have is a list index that could have come from either list - all of that is nigh on impossible to reason about directly. So, I simply made a bunch of wrapper methods around the two lists that make them behave like a single, simple list.

Namely:
  • makeNewHeadList() - actually makes a pair of lists, but the caller doesn't see that.
  • setHeadList(int whichListToShow) - shows the correct pair of lists on the GUI, indexed 0, 1, 2 etc., each index referring to a pair of lists.
  • getModelIndex() - takes a list selection (i.e. after a user selects an item in one of the lists on the GUI) and returns the index of the corresponding model, so the program can load whichever model was clicked.
  • addModelToList() - takes a model name and adds it to the next free slot in whichever of the two lists it needs to go in.

(Remember: each list shows 10 model names; clicking a name makes that model show up in the GUI.)

Here's a little taster of what it's looking like:
ExportJPEGs running - the heads will show up in the list as they are added to the watch folder


Placement Report

Got an email from the MEng manager Nick Filer informing us that we will need to write a full-on 35-page report on the projects us MEng students have been doing over this summer - worth 15 credits alone! I'm so glad Toby had me blog everything, because this is just great - I have all the information I need right here in this blog, and all I need to do is write it up into a report. I would have loved to do this Augmented Reality stuff as a third-year project (not in any way belittling the project I actually did, a SpiNNaker demonstration, because that too was awesome), and now I can finally live the dream with a lovely little report and a seminar all about the work I've been doing.

This is quite probably the first time the prospect of writing a report hasn't been met with unadulterated despair bordering on utter doom - what a nice change :-)

The vacation experience

So, in general, this summer has been absolutely fantastic. I've enjoyed every bit of it, from the copious random tasks to the huge events, to rebuilding some ultra-extreme-cutting-edge PCs, to the difficult programming, to combining Visual C#, .Net, XNA, 123D, shell scripting and various other technologies in order to make one heck of an Augmented Reality program. So much so that Toby is actually pursuing an extension to my work placement! If that goes through it would just be great. I can spend three more weeks doing what I do best and having a great time doing it, and as a result we should see a super polished and refined Augmented Reality experience by the time this summer is done. Count on it.

Wednesday, 24 July 2013

Day 33: Results of hard labour

Stuffage

Today I did the following:
  • got the FileSystemWatcher working in ExportJPEGs (calls an event handler whenever a file is added to the folder)
  • Implemented ExportJPEGs to do everything up to the uploading of the gif, when user clicks export button
  • Implemented ExportJPEGs to automatically run lftp monitor upon starting
  • Implemented ExportJPEGs to fix model and rename regex inside fbx files when file is added to watch directory
  • Implemented ExportJPEGs to automatically build models after the files are ready
Now I basically want a system where new heads that are imported from the webserver show up in the GUI.

So the new files get built automatically and sent to the common folder (that LowryDemo and this ExportJPEGs can access).

ExportJPEGs displays a list of all head files that are ready for loading (the pre-load building happens automatically; after files are built they show up in this list).

The user can select a head from the list and click a button to load it, which brings it into the view so they can move it about and export it as a gif as and when required. In this way the user can switch between heads back and forth and re-make-and-upload gifs.

Update: Just found out GoblinXNA has only a half-finished version of a ScrollPane. It's basically disabled, but in the source code you can see a mostly-complete-looking implementation of a ScrollPane with the comment "IMPLEMENTATION NOT YET FINISHED" at the top.

Without a ScrollPane our list won't be able to scroll up or down, so when we have 50 or so heads in they will just overflow the view and run off the end.

Well... I've been having a crack at fixing scroll pane myself. Joy.

Tuesday, 23 July 2013

Day 32: GUIs and exporting, Animation13 web gallery

Batch Photo Processing - sed and imagemagick

Got my CS password reset and now I can successfully get into babbage.

ImageMagick has a tool called mogrify for batch-converting images, but I simply hacked together a shell script or two to get the job done, renames and all.

This is the basis:

for infile in *; do
    # img_0001.jpg -> img_0001.preview.jpg; with sed -n ... p, only names
    # containing a dot are printed, so anything else is skipped below
    outfile=$(echo "$infile" | sed -n 's/\./.preview./p')
    test -z "$outfile" && continue
    convert "$infile" -strip -density 72 -thumbnail 300x300^ -gravity center -extent 300x300 "$outfile"
done

Which took all the images called img_xxxx.jpg and made previews to the required specification. A similar command was run before this to turn the raw images into suitable main images at 72 dpi and 1200px width, with the height scaled proportionally.
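That main-image pass would have looked something along these lines - a dry-run sketch only: the output naming and the convert flags are my assumptions, and the script echoes each command instead of executing it, so it runs even without ImageMagick installed.

```shell
# Dry-run sketch of the main-image pass. Output names and flags are
# assumptions; echo prints each convert command instead of executing it.
cd "$(mktemp -d)"
touch img_0001.jpg img_0002.jpg          # stand-ins for the raw photos
for infile in img_*.jpg; do
    outfile=$(echo "$infile" | sed 's/^img_/main_/')   # hypothetical naming
    echo convert "$infile" -strip -density 72 -resize 1200x "$outfile"
done
```

Dropping the echo (and the stand-in touch) would turn this into the real batch conversion.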


And then back to automating ExportJPEGs

Coding and GUIs

So the plan for the GUI is to have two sliders: one to control the vertical height of the model and the other to control its size. There will also be a checkbox to flip the head upside down (because some heads simply come in upside down).

All done, and it works nicely!

Easiest to use JPEG exporter on the market!


Now I need to manage loading content into the program, so let's do a bit of thinking..

I know I can build XNB files outside of the program. I can manually press a key (or a button on the GUI) to load the newly built files in and display them in the GUI once they have been built (so the user can export JPEGs). I wonder if this can be automated..

Found that you can watch a directory for file changes using a FileSystemWatcher, details here: http://www.codeproject.com/Articles/26528/C-Application-to-Watch-a-File-or-Directory-using-F

So perhaps I can implement a system that monitors for files and displays them in a drop-down list on the GUI? And when a new head model arrives, the user can click the head in the drop-down list, and the program will build it, load it so the user can export JPEGs, and send it to the LowryDemo program.

This will completely remove the need for shell scripting to manage and move files about, but we will still use shell scripts to monitor the web server and grab files, and to make and upload gifs after JPEGs have been exported.
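That gif-making step could be as small as one ImageMagick command - a dry-run sketch (the frame names and the -delay value are assumptions, and the command is echoed so the script runs without ImageMagick installed):

```shell
# Dry-run sketch of the JPEGs->gif step the shell scripts handle.
# Frame names and -delay are assumptions; echo avoids needing ImageMagick.
cd "$(mktemp -d)"
for i in $(seq -w 1 36); do touch "frame_$i.jpg"; done   # 36 stand-in frames
echo convert -delay 10 -loop 0 frame_*.jpg head.gif
```

The zero-padded names from seq -w keep the frames in rotation order when the glob expands.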

Now the final step if I do implement this would be to tell LowryDemo to load the next head once the head is available. Again, this can presumably be done with the FileSystemWatcher - watch the content directory for new .xnb files coming in, when they do come in then load them and add to the head model set.

FileSystemWatcher... what an utterly fantastically useful tool!

The answer has been comprehended

And now I've discovered you can run bash shell scripts from Visual Studio C# applications (more precisely, you can run them from an MS-DOS batch file (.bat), and you can run such batch files from C# programs). So in other words I can automate the system so that all the user has to do is run this ExportJPEGs program, and the program will automatically do just about everything:
  1. run LowryDemo
  2. run the lftp monitoring client to grab new heads from the central file server
  3. fix the model and texture files, rename them etc
  4. build them
  5. have LowryDemo automatically sense newly built files, load them and add them to the AR program
  6. load the files into this ExportJPEGs program, display them in the GUI for the user to set the view etc
  7. Let the user click the export button, which will not only export JPEGs but also convert them into a gif AND upload the gif to the web server.
So, now let's see if I can get all the parts of this to work together nicely.

Automation tasklist:
  • Get FileSystemWatcher working in ExportJPEGs (return event when file added)
  • Implement ExportJPEGs to do everything up to the uploading of the gif, when user clicks export button
  • Implement ExportJPEGs to automatically run lftp monitor upon starting
  • Implement ExportJPEGs to fix model and rename regex inside fbx files when file is added to watch directory
  • Implement ExportJPEGs to automatically build models after the files are ready
  • Get FileSystemWatcher working in LowryDemo, load new head after content is built, manage model sets etc

Monday, 22 July 2013

Day 31: ExportJPEGs automation

ExportJPEGs automation

The user has to perform some fine tuning of the export jpeg parameters in order for an acceptable rotating head to be exported.

This is because every single head coming from 123D Catch differs widely in size, orientation (luckily they are only ever the right way up or upside down, so normalising the orientation isn't too difficult), and vertical position. The camera in the ExportJPEGs program must therefore be set in the right place for every head, to give consistent images overall.

The best way to manage this user input is via a GUI. The program could run and display a test image of the model, the user changes parameters in the GUI and the test image updates itself. When satisfied, the user hits the "Export" button and the JPEGs are output.

So today, I made GUIs.

I need to incorporate the exporting of JPEGs into Goblin XNA so that I can use Goblin's GUI wrappers and easy-to-use GUI features. Working out WinForms manually would take too long.

Implemented the writing to an alternate render target and the dumping of that target to a JPEG. The dumped images show the GUI correctly, but do not show the rotating shape in the centre at all.

I'm using the Scene.RenderScene() method to draw the scene to the render target. Normally you use Scene.Draw(), but that also updates all the physics etc. that you want updated once per frame; in comparison, RenderScene just draws the scene as-is without updating anything (which is what we want - we will be drawing the scene 36 times in a single frame, rotating the model 10 degrees each time).

After consideration, it seems best to use the original Scene.Draw() anyway, because we do update the scene graph by rotating the shape. If we used RenderScene, the updates to the graph would never be applied and it would draw the same shape 36 times in one draw cycle.

To use Scene.Draw we simply need 36 frames rendered to the back buffer and dumped off as JPEGs consecutively, with the object rotating 10 degrees each frame. To implement this I'll add a simple boolean called drawingJPEGs or something; while it is true, no update to the scene graph and no input to the GUI can be made. In addition, a message saying "PROCESSING" or something will appear over the GUI, so the user knows that JPEGs are being exported.

Foamalux, Animation13 Gallery

The foamalux arrived from eBay!
I think it may be a bit more flexible than we anticipated, but we'll test it once it's cut down to size. Definitely seems hard wearing enough.

I got a little side job - processing images to upload to the new Animation13 photo gallery. Rina (Web and Publications Officer) selected 60 or so from the massive bunch we received from Chris Foster the photographer (and they really are amazing photos!), and it's my job to ensure they have the correct names, and that we have large images, preview images and a thumbnail of the correct dimensions, aspect ratio and dpi suitable for web viewing. And then to upload them to our web server on babbage.cs.man.ac.uk.

Unfortunately I couldn't log into babbage - my CS password wasn't being accepted. After further inspection I noted that I couldn't log into my CS account at all - a quick email to Tim from duty office ensued, and it should be fixed tomorrow.

Friday, 19 July 2013

Day 30: Why the Windows 7 installer is horrendous

The horror that is a Windows installation

Set about reinstalling Katy and Donald with the on-campus Windows 7. Katy did fine - the install went well and all was dandy. Donald, now, was another matter entirely.

First, the installer would get to the point where you choose which drive to install Windows on; there it would hang and tell me "a required CD/DVD device driver is missing", insert the installation disc to load the driver.

This issue normally happens when you need to load an AHCI or RAID driver for the installation (it's all to do with SATA drivers and recognising your hard drives, SSDs and such). Now, I tried using the installation disc, I tried all 50 or so of the AHCI and RAID driver options on that disc, and none of them actually fixed the issue and let Windows install.

However, the second cause of this error is the Windows installation media you are installing from (in my case a live, bootable USB drive) being internally corrupted in some way or another. In that case, reformatting and remaking the bootable USB drive fixes it.

Remaking the USB drive got me past this issue; then I hit another error when I tried to format the SSD to install Windows - apparently Windows cannot create the partition.

So anyway, I've discovered the most foolproof way to get Windows to install when you have drive problems: boot into PartedMagic or some other live partition-management software (e.g. GParted directly) and set up your drives manually. You want a new partition table, but leave the drive you wish to install Windows on unpartitioned. No initialising an NTFS partition or any of that jazz - you want a fresh, clean disk with only an MS-DOS style partition table, and no partitions initialised on it.

Only took about 10 hours of trial and error with Hoops and now Donald to work that one out!
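The recipe boils down to a couple of commands - a hedged sketch only: the device name is a placeholder, and the commands are echoed rather than executed so nothing real gets wiped. On the actual machine you would check the device with lsblk and drop the echos.

```shell
# Sketch of the disk prep described above. /dev/sdX is a placeholder
# device name; echo prints the commands instead of running them.
DISK=/dev/sdX
echo parted "$DISK" --script mklabel msdos   # fresh MS-DOS partition table
echo parted "$DISK" --script print           # verify: no partitions listed
```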

Promotional Video for the School

Some chaps took some quick video clips of the AR stuff in action as part of a promotional video for the University. Pics below, courtesy of Toby (thanks :))
Dude on the left has the absolute latest in camera stability gadgetry

The bouncing golf balls, result of 4 gruelling hours well spent
And the annual barbecue was also roaring away in the background as I sat typing furiously away indoors. Due to fasting commitments I couldn't join the general food splurging, and my afternoon was spent distancing myself far enough away from the sumptuous smells wafting through just about everywhere. Having a room without a window has some benefits then!

Back to Automation

We left off with the following tasks done:
  • Replacing regex within .fbx files so they reference the correct textures
  • Building .fbx and .jpg files into working .xnb files via the command-line MSBuild
  • Various bits of shell scripting

And these not yet done:
  1. Some more shell scripting (moving the correct files about..?)
  2. How to manage the loading of files into LowryDemo (current method of replacing .xnb and then hitting M, Z is too dangerous and user unfriendly)
  3. How to manage the gif generation (user needs to manually adjust variables on every head to center the head model so that it rotates in the middle of the view: need to adjust zoom, Y position, Y rotation of the model before running the JPEG snapping bit)

Looked into task 2 today, managing loading of files into LowryDemo. After lots of googling, debugging, googling, sitting back in frustration, debugging and googling I've finally worked it out.

Overrode the model loader code to customise it, because I want to load the .xnb files that I built myself - not the ones that the project builds at compile time. This means specifying a custom directory and model name for the loading. And it works - I can load whatever I want, whenever I want, while the program is running! The correct head loading in LowryDemo is complete, at long last.

To do now:
  • work out the automation of ExportJPEGs and implement it into the automated workflow
  • use batch files in the shell script to do the MSBuild stuff on the fly
  • finish off the batch-file automation.

And that should be it!
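The batch-file side of the MSBuild step could be as small as this - a sketch with assumptions: the MSBuild path is the usual .NET 4 location, BuildHead.proj is a hypothetical project name, and printf stands in for the real call so the script runs without Visual Studio installed.

```shell
# Sketch of kicking off the content build from a script. The path is
# the usual .NET 4 MSBuild location and BuildHead.proj is a hypothetical
# project name; printf stands in for actually invoking MSBuild.
MSBUILD='C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe'
printf '%s %s\n' "$MSBUILD" BuildHead.proj
```

From Cygwin the same idea would go through cygstart or a .bat wrapper, since the path is a Windows one.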

There are also some optimizations I need to do in general that I recently learnt about, mainly adding content links rather than copying content to the content folder. This should save a good gigabyte or so of wasted space in the final program! Very welcome stuff.

Random notes I made whilst working it all out (for reference):

(also links to more great resources on the Content Pipeline itself and then how to extend it)

At compile time, all assets are processed through the content pipeline and stored in a compiled binary file (.xnb).
Content.Load<Model>("blah") retrieves the model you specify from the content pipeline, handing it back as a Model object to whatever called Content.Load.

Brief overview of Content Pipeline, 4 steps on all assets before they are used in your game:
  1. Importer: reads imported data so the content pipeline can process it. Outputs the object that the next element can process...
  2. Content Processor: converts raw data from Importer into something meaningful, what we call processed data.
  3. Content Compiler: puts processed data into a (binary) compiled asset file, an .xnb
  4. Content Loader: reads in compiled asset file, turns it into an object your game can use

Summary:
  • Specify Content.Load to point to your own premade .xnb files.
  • The name must not include an extension.
  • It is not the name of the original file (sans extension) before compiling, as I first thought - simply use the name of the final .xnb file (sans extension).
  • Corresponding textures must be in the same directory.
  • CANNOT USE ABSOLUTE FILE PATHS - this one got me for a looong while, but it does make sense: when moving the code to other people's computers, you don't know where stuff will end up in an absolute sense.

Thursday, 18 July 2013

Day 29: Return of MSBuild

...and I worked out what the MSBuild problem was. It wasn't recognising the assembly Microsoft.Xna.Framework.Content.Pipeline.dll, apparently because I was using the wrong public key token.
Success is the ability to go from one failure to another with no loss of enthusiasm.
Opened that sample code on WinForms, and noticed they were referencing the same assembly from within the code - and that they were using a different public key token to the one I had specified. Copy-pasted their token and boom! MSBuild now recognises the BuildContent task and the fbx starts building!

So, new error:
MSB4044: The "BuildContent" task was not given a value for the required parameter "TargetProfile".
Fixed by adding the parameter TargetProfile="HiDef" to my proj file.

Next error:
1.fbx : error : Cannot autodetect which importer to use for "t1.jpg". There are no importers which handle this file type. Specify the importer that handles this file type in your project.
This happens after it seemingly starts importing and processing 1.fbx (the fbx file I told it to process) correctly. 1.fbx references the texture t1.jpg, as intended, and I've placed the texture in the same directory as the model so it can see it. Now I need to somehow tell MSBuild to use Microsoft.Xna.Framework.Content.Pipeline.TextureImporter.dll, as well as the TextureImporter and TextureProcessor pipelines. How on earth am I going to do this... so this is the current project file:
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 
    <UsingTask TaskName="BuildContent" AssemblyName="Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553" /> 
    <PropertyGroup> 
      <XnaInstall>C:\Program Files (x86)\Microsoft XNA\XNA Game Studio\v4.0\References\Windows\x86</XnaInstall> 
    </PropertyGroup> 
    <ItemGroup> 
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.FBXImporter.dll" /> 
    </ItemGroup> 
    <ItemGroup> 
      <Content Include="1.fbx"> 
        <Importer>FbxImporter</Importer>   
        <Processor>ModelProcessor</Processor> 
      </Content> 
    </ItemGroup> 
    <Target Name="Build"> 
        <BuildContent SourceAssets="@(Content)" PipelineAssemblies="@(PipelineAssembly)" TargetPlatform="Windows" TargetProfile="HiDef" /> 
    </Target> 
  </Project>
So I want to include TextureImporter.dll in the PipelineAssemblies.

Welp, try the obvious (added line 8):
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 
    <UsingTask TaskName="BuildContent" AssemblyName="Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553" /> 
    <PropertyGroup> 
      <XnaInstall>C:\Program Files (x86)\Microsoft XNA\XNA Game Studio\v4.0\References\Windows\x86</XnaInstall> 
    </PropertyGroup> 
    <ItemGroup> 
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.FBXImporter.dll" />
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.TextureImporter.dll" />
    </ItemGroup> 
    <ItemGroup> 
      <Content Include="1.fbx"> 
        <Importer>FbxImporter</Importer>   
        <Processor>ModelProcessor</Processor> 
      </Content>
    </ItemGroup> 
    <Target Name="Build"> 
        <BuildContent SourceAssets="@(Content)" PipelineAssemblies="@(PipelineAssembly)" TargetPlatform="Windows" TargetProfile="HiDef" /> 
    </Target> 
  </Project> 
...
There are moments when everything goes well; don't be frightened, it won't last. 
Which is almost exactly what we needed, except that LowryDemo expects the model to be named 1.xnb and not 1_0.xnb. Annoying, but we can simply rename every model that comes out of this MSBuild and it will work straight off. At least the texture is correctly named t1_0.xnb.
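That rename is a one-liner with shell parameter expansion - a sketch with stand-in files (the _0 suffix handling matches what MSBuild produced above; textures are deliberately left alone):

```shell
# Rename MSBuild's 1_0.xnb back to the 1.xnb that LowryDemo expects.
# Empty demo files stand in for real build output; texture files
# (t*_0.xnb) are intentionally untouched.
cd "$(mktemp -d)"
touch 1_0.xnb t1_0.xnb
for f in [0-9]*_0.xnb; do      # model files only - names starting with a digit
    mv "$f" "${f%_0.xnb}.xnb"  # strip the trailing _0
done
ls
```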

What we have so far

Right, so here's a quick lowdown:

if our incoming folder's contents changed (i.e. files arrived):
  • get the names of the new files (this is hard for some reason)
  • (perhaps copy the new files to another folder to be processed, so we don't lose the originals)
  • rename the files to the next incremented integer so we don't miss any heads (1.fbx, 2.fbx, etc.)
  • rename any new .jpg file: cut off "_tex_0" and prepend a "t", so 1_tex_0.jpg becomes t1.jpg
  • go into any new .fbx file and replace the regex stuff
  • build the .fbx into an .xnb
  • copy-paste it into the LowryDemo program folder
  • load the next head.
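The renaming and regex part of that pass could be sketched in shell roughly like this. The file names, the _tex_0 convention and the sed pattern are my assumptions based on the steps above, and stand-in files are created so the sketch runs anywhere:

```shell
# Sketch of the incoming-head processing pass described above. Names and
# the sed pattern are assumptions; stand-in files replace the real
# web-server downloads so the script is runnable anywhere.
incoming=$(mktemp -d); work=$(mktemp -d)
printf 'RelativeFilename: "head_scan_tex_0.jpg"\n' > "$incoming/head_scan.fbx"
touch "$incoming/head_scan_tex_0.jpg"

# next free integer = number of models already in the work folder + 1
n=$(( $(ls "$work"/*.fbx 2>/dev/null | wc -l) + 1 ))

for f in "$incoming"/*.fbx; do
    cp "$f" "$work/$n.fbx"                    # copy, keeping the original
done
for t in "$incoming"/*_tex_0.jpg; do
    cp "$t" "$work/t$n.jpg"                   # 1_tex_0.jpg style -> t1.jpg
done
# point the model at the renamed texture (the pattern is a guess at what
# 123D Catch writes into the .fbx)
sed -i "s/[A-Za-z0-9_]*_tex_0\.jpg/t$n.jpg/g" "$work/$n.fbx"

cat "$work/$n.fbx"
ls "$work"
```

The real version would then hand $work/$n.fbx to the MSBuild step and copy the resulting .xnb files into LowryDemo's content folder.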

The renaming can all be done using sed, as can the regex replacement within the fbx files (I tested the regex-replacement sed command; all seems to work perfectly). The build from fbx to xnb has been sorted too, now that I've enforced my will upon MSBuild.

One issue now is what should really be the "easy" bit: just the general moving of files around. Getting the names of the new model and texture, transporting them to be built by MSBuild, transporting the .xnb files to the correct folder inside the bowels of LowryDemo's content, and stuff like that.

The more important issue is how to deal with loading the compiled xnb files into the AR demo (LowryDemo) program. The current placeholder heads and copy-pasting of xnb files can be automated, but it's a bit fiddly, so I'm looking into content pipeline extensions as well.

Another reinstall

There was an issue with activating the Windows 7 installs on Katy, Donald, Walter and Hoops. In order to fix it, we are going to reinstall the PCs with the on-campus version of the 64-bit Windows 7 disk image provided by the University.

So yes, starting with Donald I'm going to be reinstalling absolutely everything... again. Sounds like fun! I'm so good at this now I can probably write a script that presses the correct keys at the correct seconds to completely automate the install of Windows, Visual Studio, Goblin XNA, 3DS Max, Camtasia, etc..