Tuned up the mbox file importing on Thunderbird tutorial (https://docs.google.com/document/d/1voSf8PuXz7jo-vVYOxHayYHU5lzqZCRysqrN-a46MfA/edit?usp=sharing), added Windows and Linux specific sections.
Continued documenting the AR system.
Finished overview of ExportJPEGs, the hard part starts now - detailed explanations :)
Vacation Student at the University of Manchester one summer long ago.
Friday, 30 August 2013
Thursday, 29 August 2013
Day 53: Doc
Documentation
Finished off working on the Lowry Demo and moved on to the documentation. Did some final tweaks to the code to get everything looking a bit more professional.

A future improvement came to mind as soon as I started the documentation: it would be a great idea to allow users to tweak certain selected global variables of the program using a config file. This system is already in place with a config.txt file that modifies marker information and such, but I plan to update it to use XML, which is far more flexible. I hope to eventually extend the XML config file to allow users to add new objects automatically, simply by adding the relevant information to a central XML file.
However for now, to save time (time really is flying - only two weeks or so of work left!) I shall simply list this as a future development plan in the documentation.
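To make the idea concrete, here's a rough sketch of how such an XML config might be read - the element and attribute names are entirely hypothetical, nothing like this exists in the code yet (and the real program is C#; Python here just for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical shape for the future config: each <object> adds one model.
CONFIG = """
<arconfig>
  <object marker="7" name="Fire" model="fire.fbx" sound="crackle.wav"/>
  <object marker="9" name="Clock" model="clock.fbx" sound="tick.wav"/>
</arconfig>
"""

def load_objects(xml_text):
    """Parse the config into a list of attribute dicts, one per object to load."""
    root = ET.fromstring(xml_text)
    return [obj.attrib for obj in root.findall("object")]
```

The point being that adding a new model would then mean adding one element to a central file, with no code changes.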
Started it off, stored it in my Google Drive: https://docs.google.com/file/d/0B-L96HHcA2JTQ2J2SDJTaUV0clE/edit?usp=sharing
(anyone with the link can view and comment on the documentation)
Email and stuff
Learnt that IMAP on Outlook is currently down, which is why I couldn't log in previously. Hopefully it'll be fixed soon, and then I can finish off Steve's jobs.
Wednesday, 28 August 2013
Day 52: IMAP and Outlook
Microsoft, the future of email
So the mbox work was successful, but unfortunately before the tutorial can be completed we need to actually set up Thunderbird to use the new Outlook 365 mail system. Steve did mention documenting this as another of my tasks, but I completely forgot - so I set about doing it today. It proved impossible.

Now, so far I have been logging into the new Outlook mail system directly via the student portal. This involved logging into the portal and then clicking the "EMAIL" button, which took me directly to my Outlook 365 inbox. There was no actual login screen for Outlook itself.
So that's all well and good, but when I tried to access the Outlook web app directly and log in, no combination of username and password would work. The point of the new change, as Steve explained, is that everything will use a single username and password - the central University one, i.e. the 8-character ID every member of the University has (starting "mba_____" or something similar) and its corresponding password.
Well the Outlook web app login page does not like firstname.lastname@student.manchester.ac.uk, nor does it like the 8_digit_central_ID@student.manchester.ac.uk. There is currently no way to access that mail account other than logging into the Student Portal (i.e. MyManchester) and going via there.
The problem with this is that I do not have a username or password to give to Thunderbird to have it fetch my Outlook mail. From within the Outlook 365 web app you can view the IMAP settings (server hostname, port, SSL etc.). It also tells me that my ID is muhammad.mahmud@student.manchester.ac.uk. Within Thunderbird I have set up the IMAP and SMTP settings accordingly (it reported everything as fine, and even found port 23 on the SMTP server to be a good 'un, so that part seems to be working), but when it attempts to verify the username and password it simply fails.
So in a nutshell, I don't know my Outlook 365 username and password!
For one thing, I understand that the University does not want students to be able to change their Outlook password from within Outlook 365 itself at any time, because then that won't sync with their university password which will cause all sorts of problems. But I need some way of knowing my Outlook login credentials so that I can bypass the University Student Portal and get my mail forwarded to Thunderbird.
Another interesting occurrence: the Student Portal now no longer logs me in to my Outlook account, because I've entered the password incorrectly too many times - it now comes up with the captcha screen ("input these characters to prove you aren't a robot"), where it also requires me to type my password again. But I don't know my password, because the Student Portal was handling all that automatically! So there is no way for me to get into that account until Outlook resets itself. Actually, I can just use another browser, but the point remains!
AR
Fixed a few bugs: sound now correctly re-enables itself when switching back to a model set from another model set while the marker card stays in view throughout. I think the sound system is pretty much done! The only thing left is to find and add more sounds. Working on the bouncing golf balls - I intend to program them so that the bounce sound plays whenever any ball bounces. Should be realistic and fun!
Day 51: Not a Monday
Admin
First things first: yesterday being a bank holiday has thrown my precisely tuned day numbering scheme utterly off balance. As I did not work yesterday, it can barely count as Day 51. So to clarify, today (Day 51) is a Tuesday, and Friday will now land on Day 54 (rather than Day 51 being a Monday and Day 55 a Friday, as would usually occur).

IMAP, mbox files, Thunderbird
Worked out how to get Thunderbird to read mbox files (which was one of my Steve Pettifer missions) without being connected to any IMAP account. Documenting the process in a Google Doc: How to import old mbox mail.

AR
Finished coding a first draft of the sound system and it works! The dude walking, ship through fire, clock, buggy and heads all have lovely sound effects playing when the cards are visible. After fine tuning the code a bit more, the effects now successfully disappear when the cards are removed so all looking good.
Tuesday, 27 August 2013
Day 50: Resounding progress
The sounds I have need to be split into two groups: background sounds and sound effects. The two have different characteristics and logically should be implemented in different ways.
Background music and such can be imported directly as MP3 or WMA, and then played, paused, stopped etc.
Sound effects should be handled by creating an XACT project from WAVs, and then importing the XACT project into the AR program. 3D positioning and attenuation can be applied to the effects as well as looping counts (up to infinite) and they are instantiated as Cue objects and then played at the appropriate times.
I've decided to classify the sounds into the two groups based on their usage. If a sound needs to have 3D effects (including attenuation), or is intended to be played as an effect (i.e. played once and forgotten) then it's an effect. Otherwise it's background music.
Background Music:
guitar music
breathe through gasmask
(various insect sounds)
crackle fire
flamethrower
Lear jet engine panning
ticking clock
swamp buggy revving
spooky wind
Effects:
golf ball in sand
gravel crunching
Finished the code regarding sound effects for when the heads are being shown. It now plays eerie background sounds whenever the current model set is a head set, and correctly stops all other sound effects regardless of what marker cards are showing (e.g. boots walking, fire sounds etc from other models), starting them off again when we switch the current model set back to a non-head model set.
This was made efficient by putting the code to enable or disable a model (non-head) sound effect into one method that takes a variable specifying whether the sound should be enabled or disabled, declared thusly:
private void processFoundModelAudio(int foundModelIndex, bool Enable)
And something like this deactivates all current sounds with a bang:
for (int i = 0; i < MarkerNodes.Length; i++) processFoundModelAudio(i, false);
perform_bang();
(Um the bang bit is a joke)
Okay all looking good. Adding some simple code to play correct effects for the non-head models now.
The email address guinea pig thing is underway, my account is done I'm now on the Office 365 system and have some lovely tasks involving documenting how to set up email on Linux and Windows machines, complete with screenshots, videos if necessary and all the general stuff. Should be fun.
Friday, 23 August 2013
Day 49: A sound call
The happenings @ Kilburn
I realised that the plan described yesterday (for managing sound) isn't the best solution, because in reality we only want to do audio processing when either (a) something comes into view after not being in it, or (b) something goes out of the view after being visible. Otherwise no changes are made: any audio playing just keeps playing, and vice versa. Yesterday's method actually calls the audio processing script for a marker on any frame during which that marker is seen. It doesn't know if a marker that was in view in the previous frame has since moved out (i.e. case (b) above).
So here's a better system:
// Check if any model has just appeared or has just disappeared.
for (int i = 0; i < MarkerNodes.Length; i++)
    if (MarkerNodes[i].Enabled && (
        // Marker has just left the view
        (WasFound[i] && !MarkerNodes[i].MarkerFound)
        // Marker has just come into the view
        || (!WasFound[i] && MarkerNodes[i].MarkerFound)))
    {
        // If it's just come into view then enable sound, otherwise disable sound
        processFoundModelAudio(i, MarkerNodes[i].MarkerFound); // second parameter specifies enable or disable
        WasFound[i] = MarkerNodes[i].MarkerFound; // update for next time
    }
And then we have the switch statements in the processFoundModelAudio method to determine exactly which model it is. Lovely! Continued coding this stuff into the AR demo.
Georgiana finished off the Pi's, all 278 of them today. Turned out we are short on 48 or so power supplies, we've seemingly lost track of a box or two! The rest of the Pi's are all packed and ready to go.
Also noted that we were horribly short on card readers, only had about 80 of them. So Steve decided since the school lab PC's will all come with card readers anyway, we won't give the students any card readers with their Pi's. Eventually I had to go over to the AIG lab and remove all the existing card readers from the bags... a gruelling display of manual labour indeed.
Also managed to lock myself out of my CS account. Again. Password wasn't working! Similar thing happened about a month ago when I was uploading pics for Rina and needed to use my CS credentials, noticed I couldn't log in. Whatever the problem was that time, Tony Curran walked me through resetting my password and all was well. He also mentioned (rightly so) that I really should have gone through the ARS system rather than just turn up at his door with an email from Rina as the only announcement (he really is a busy chap what with managing the IT of the whole building/school/whatever) - so this time I smugly submitted an ARS request... and then completely missed Tony who came looking for me when I was out for a late lunch! His office was locked afterwards so looks like this will have to wait till tomorrow.
Pi's lined up in battle formation. Katy doesn't stand a chance.
Wednesday, 21 August 2013
Day 48: New environments
Relocation
Shifted Donald over to my original office next to Toby's because I frankly half killed myself the other day by working in the AIG lab where the temperature must have been around 18 degrees. The fact that the lab is cooler than other places in the building is great when it's a hot summer's day, but with the temperature dropping off a bit lately it was too much to handle.
Walter's tower is still in the lab, his screen is with Georgiana to connect to the master Pi so she can run the Pi labelling (which is going very well indeed, almost finished I believe! All 250 Pi's) and so is the ethernet cable. Which means I am back down to a single PC, a humbling prospect but nevertheless when times are harsh we must make do with what we have.
The card readers ran out so the Pi labelling was being inconveniently held up, have asked Steve to investigate.
AR
Made a little list of the models assigned to each MarkerNode. The first model refers to the model shown in ModelSet1, the second one refers to ModelSet2 (i.e. when 1 or 2 is pressed on the keyboard):
* 0 Sofa, Ant
* 1 Marvin, Police
* 2 Gingerbread, Diskette
* 3 Dude, Bird
* 4 Mosi, Turbosonic
* 5 Dragon, Creeper
* 6 Osaka, Buggy
* 7 Fire, Golf
* 8 Guitar, Basketball
* 9 Clock, Globe
Using this information, and setting a good value for MaxDropouts, I can determine exactly when each model is showing using the isFound method and checking the active Model Set. Then it's simply a matter of playing the right sound at the right time.
I plan to use a for loop to determine whether models with sound are up or not. Then pass them to a method which uses case statements to do the rest:
Loop through all models, if its found then call method processFoundModelAudio:
method processFoundModelAudio takes integer ModelFoundID
if modelset 1 active, perform case statement 1 else perform case statement 2 else heads are visible so process them.
Case statement 1 and 2 switch on ModelFoundID, has cases 0 - 9 for each model with processing info for each.
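The dispatch plan above, sketched in Python for clarity (the demo itself is C#; the function name mirrors my processFoundModelAudio, and the return strings here just stand in for the real sound calls):

```python
# Model assignments per marker index: (ModelSet1, ModelSet2), per the list above.
MODELS = [("Sofa", "Ant"), ("Marvin", "Police"), ("Gingerbread", "Diskette"),
          ("Dude", "Bird"), ("Mosi", "Turbosonic"), ("Dragon", "Creeper"),
          ("Osaka", "Buggy"), ("Fire", "Golf"), ("Guitar", "Basketball"),
          ("Clock", "Globe")]

def process_found_model_audio(model_found_id, active_set):
    """Decide which sound to process for a found marker, given the active model set."""
    if active_set in (1, 2):
        model = MODELS[model_found_id][active_set - 1]
        return "play sound for " + model
    return "process head sounds"  # any other set means the heads are visible
```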
To play sound cues in xact project use: http://rbwhitaker.wikidot.com/3d-audio-effects-location
To play background music: http://rbwhitaker.wikidot.com/playing-background-music
Attenuation effects: http://rbwhitaker.wikidot.com/3d-audio-effects-attenuation
Sounds great!
Day 47: Marks, get set...
GO!
I passed the Pi's and their script off to Georgiana, who proceeded to rampage through the first 15 of them in 45 minutes (3 minutes a Pi is about right; it takes about that long for one to boot up and have its MAC registered).

So today we just want to time and get a feel for how many a single person can get through in how long, so that we know if I need to join in the labelling before the end of the week (or if we can just leave Georgiana to toil alone).
Email testing
I'm also the proud guinea pig (and canary) of the new email infrastructure. Basically Tony from IT Services is going to switch my address over to the new system as a test case, and I report a few things to him and then make a tutorial about how to set up the email system on Windows and Linux and even on my Android smartphone just because I can. Said tutorial will include lots of screenshots.
QandA to Wiki
In previous years the CS QandA site has not really been very popular, even though it is actually a great place to communicate with the staff and other students collaboratively and solve common problems. So this year it has been decided that there will be a collaborative staff and student wiki, with the hope that this tech will garner increased popularity and have more success than the original QA site. Another one of my tasks is to go through the QA site and check for any substantive or useful questions that could be used to seed the new Wiki. Steve Pettifer asked me to go through for an hour and just copy any good discussions into a google doc so that he can determine if porting such questions onto the Wiki is actually a good idea or not.
Here's the doc:
https://docs.google.com/document/d/14Kf5__sG9qurGWexaakkGVYIuToDacSBLencf09j6TE/edit?usp=sharing
AR
Carried on some more general code fixes and mass commenting!
Monday, 19 August 2013
Day 46: Bugs
Donald fixed, the future of debugging
Got ethernet on Donald working again, and went on a wild goose chase trying to find the right drivers for this annoying unmarked WiFi card. Finally found them; WiFi works, all good to go.

Back to AR. Upon every model acquisition there was some code that read in the marker nodes referenced by the config.txt file and assigned them to the model. Previously this code had been repeated at the start of every demo add method, which caused a lot of redundancy. I moved it all into one place - a single loop over an array that did everything in a choice few non-repeated lines, all during the marker tracking initialization method. Unfortunately, since doing this, none of the models show up on the markers when the demo is run. So today I set about fixing it.
This is a great example of how easy it is to debug when you have two machines that can run the same code. I had the older working code debugging on Walter, and the new broken code debugging on Donald alongside it. That let me compare variables directly, observe what differences my code changes made in execution, and various other things. I'm somewhat surprised that debuggers don't offer this functionality - stepping through two versions of the code simultaneously - or maybe they do and I just haven't found it. It was immensely useful to me!
So first off, for the old working code on Walter, I checked out in debug mode the complete MarkerNodes that were being added to the scene graph after the code completed. I then checked the same variables on Donald, debugging the non-working code, and compared them to look for differences.
And the answer was immediately clear, my new code was resulting in two variables remaining null: MarkerNode.markerConfigs and MarkerNode.MarkerID
Whereas with the working version these variables were set as follows:
markerConfigs = { [object 2] }
MarkerID = "AlvarConfig6.txt"
Or if the marker node was a single marker and not a marker array then:
markerConfigs = { [object 1] }
MarkerID = 86
And after some step-through action, I discovered that in the new code my scene.MarkerTracker was null whilst the markers were being assigned to it. Oops. A quick rearrangement of code, and all now works perfectly.
Rearranged and organised markers in a sensible order, adding them to the arrays in a natural manner.
Fixed some bugs: for some reason the ship-flying-through-fire demo wasn't showing. I realised I had a firePlacer that was being disabled at odd times, not too sure why. Removed that code and now it works.
Continuing coding sound. Going well.
Day 45: Ssh vs read-only, finished maclog.py, back to AR
The final touches: safe shut down
So there were two possible solutions: either use ssh, or make the SD card read-only. You can ssh into the Pi from Python using Paramiko (http://jessenoller.com/2009/02/05/ssh-programming-with-paramiko-completely-different/) and then call "sudo halt" to shut it down, but instead I chose to go the other way and modify the Raspbian distribution on the Pi, forcing it to mount as read-only. I had to make changes to a variety of files, as described here: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=29&t=22596

Testing
I tested it in a variety of situations to hopefully ensure my script is robust enough to withstand all manner of unexpected abuse. Added lots of text printing to walk the user through every single step, and error catching to deal with annoying issues. Added a KeyboardInterrupt catch so that Ctrl-C exits the program cleanly, closing the log file and shutting down properly.
I created master and slave SD card images (with the Unix dd command) as the most foolproof backup I could think of. Each image is just under 500MB.
All finished version 1 description:
There is a Master Pi with the script and log file on it. Connect it to a screen and keyboard and run the script. Follow the onscreen instructions to work out how to use it - very self-explanatory.
The year is passed as a command line argument.
It looks for log file at startup, and asks to create one if not found.
If it is found it starts the sequence at the correct number for this year.
Any new mac address discovered that is not already in the log file (regardless of year) will be added to the end of the file on a new line, along with its sequence number as such:
b8:27:eb:6a:ca:84,13/0002
(MAC,yy/nnnn)
This will also be printed to the screen so the user can write the sequence on the Pi as required.
If the new MAC has already been logged previously (regardless of year) then an error message will print the existing sequence, and the script will continue (waiting for the next MAC) without re-logging.
It takes a pretty long time after a new Pi has been connected for it to be recognized by the script. In all situations meaningful messages walk the user through what is happening.
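The "starts the sequence at the correct number for this year" behaviour boils down to something like this (a simplified sketch of the idea, not the script verbatim):

```python
def resume_counter(log_lines, year):
    """Scan existing 'MAC,yy/nnnn' log lines and return the next sequence
    number for the given year (1 if the year hasn't been logged yet)."""
    prefix = "%02d/" % (year % 100)
    nums = []
    for line in log_lines:
        parts = line.strip().split(",")
        if len(parts) == 2 and parts[1].startswith(prefix):
            nums.append(int(parts[1].split("/")[1]))
    return max(nums) + 1 if nums else 1
```

So stopping and restarting the script costs nothing - the log file itself is the state.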
There is a specially set up SD card with IP address 192.168.0.2 which will mount its OS as read-only. This SD card will be put into each of the ‘Pi’s to be labelled’ one by one as they are connected to the Master Pi with a direct ethernet connection.
When connecting a new slave Pi, always connect its power cable last - for obvious reasons. When disconnecting, any order will do (a socket error occurs if you disconnect the network cable before the power cable for some reason, but my awesome script catches that, deals with it safely and continues).
Pi labs
Also read through the bunch of other labs at Graham's request, just to ensure everything is up to scratch. They are all located here: http://studentnet.cs.manchester.ac.uk/ugt/2013/COMP10120/labscripts/
AR
And back to AR! Donald no longer connects to the internet, ever since I started messing about with a bridge connection on the Ubuntu VM I installed - or it could be some other reason, I'm not sure. In any case the ethernet connection no longer works, and I don't know which drivers will make his WiFi card work... it doesn't even have a brand on it! I'd better figure this out, or the next time Toby tries to use Donald in some random location there will be an unpleasant surprise in store as he discovers the WiFi doesn't exist...
Next Monday I believe the Pi's will be passed to Georgiana who will go through the actual labelling process and running the script etc
Friday, 16 August 2013
Day 44: I once was a Python programmer..
Python
When they said Python was easy to learn, they weren't joking! I haven't actually programmed much in Python before, so I was learning quite a bit as I progressed through writing the script. It is in fact a very quick language to program in and pretty easy to use, with enough power to do wondrous things. A useful tool in my programmer's arsenal.

So today was spent writing basically the whole script, encompassing all of the intended functionality apart from one last issue: ensuring the slave Pi's can be disconnected without corrupting the OS. This can be done either by making the SD card non-writable or by logging into the Pi and shutting it down via ssh. It is not enough to simply flip the read-only switch on the SD card - the Pi will still mount itself with RW access. I'll fix this first thing tomorrow!
Also read through the Pi lab at http://studentnet.cs.manchester.ac.uk/ugt/2013/COMP10120/labscripts/rpi1.pdf and suggested a few improvements to Graham. Mainly I thought there was a bit too much of a "wall-of-text" effect and not enough enforced structure. Which makes it somewhat difficult to follow instructions as they are embedded as any number of sentences within paragraphs; and makes it somewhat easy to lose your way as you flip from the document to your Pi. An improvement would be to enforce some structure with bullet points, or numbering instructions to make them easy to follow and provide anchors in all the text so you can find your place easily.
Thursday, 15 August 2013
Day 43: ARP'ing about
Pi
Got the Pi problem moving with Steve Pettifer's help. First we tried to arp-scan a Pi from an Ubuntu VM on a Windows host, but the idea of navigating the virtual network adapters to route the VM to the Pi was simply daunting and somewhat unnecessary.

So I used the Mac conveniently located behind me in the AIG lab instead, as Steve had previously successfully arp'ed a Pi from his Mac, so we should know what we are doing. However, even following Steve's tried and tested method did not work - the arp-scan still failed. Apparently the OS on this Mac treats internet connection sharing differently to the one on Steve's Mac.
So then I connected two Pi's together, one a master and the other a slave (i.e. it represents all the other Pi's that are going to be connected one at a time). Installed arp-scan on the master, tried it, but still no luck.
Finally we got David in to help out, and soon discovered that getting arp-scan to work meant we had to fully configure IP addresses for the master and slave Pi's. Luckily we are using the same SD card for all the slaves, so we won't need to reconfigure anything each time. There isn't any DHCP device set up, so it is simpler to just statically assign IP addresses. This was achieved by modifying the /etc/network/interfaces file, replacing the line (on the master Pi):
iface eth0 inet dhcp
with:
iface eth0 inet static
address 192.168.0.1
netmask 255.255.255.0
gateway 192.168.0.2
and similarly on the slave Pi (though of course the address and gateway will be different).
Now, finally, with a working arp, I set about coding the Python. Installed scapy (libraries for arp'ing and other useful network-related stuff) and its dependencies as per the website instructions.
In the Python script I noticed you need to first import scapy.all and then call from scapy.all import ... For some reason it doesn't work if you try to import the required bits directly.
Decided on this design specification for the function:
It will check for any unique new Pi being connected to the Master Pi via direct ethernet (the Master and the new Pi must be using specifically modified SD cards that assign them the correct IP addresses), and whenever it finds one it will log its MAC address and a newly generated sequence number to the log file. They will be on the same line, separated by a comma e.g.
b7:56:d3:ff:32:11,13/0001
It will print useful info to the user as it runs, e.g. MAC and sequence for any new Pi, or a message saying that the Pi currently connected has already been logged. In addition it will support being stopped and restarted but still remembering which sequence number it was up to. In additional addition it will require a command line argument which is the current year, so that it can output the correct sequence number.
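A rough sketch of the bookkeeping side of that spec (the helper names are my own, not the script's; in the real thing the MAC comes from the arp scan over the direct ethernet link):

```python
import re

def parse_log(lines):
    """Parse existing 'MAC,yy/nnnn' log lines into a {mac: sequence} dict."""
    logged = {}
    for line in lines:
        m = re.match(r"([0-9a-f:]{17}),(\d{2}/\d{4})", line.strip())
        if m:
            logged[m.group(1)] = m.group(2)
    return logged

def handle_mac(mac, logged, year, counter):
    """Return (message, new_counter), logging the MAC only if genuinely new.
    `mac` would come from the ARP scan of the connected slave Pi."""
    if mac in logged:
        return ("already logged as %s" % logged[mac], counter)
    seq = "%02d/%04d" % (year % 100, counter)
    logged[mac] = seq
    return ("%s,%s" % (mac, seq), counter + 1)
```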
Sounds fun.
Wednesday, 14 August 2013
Day 42: Raspberry Pi MAC addresses and stuff
Currently stuck with LowryDemo. Moved all the repeated code that was assigning models to marker nodes into one for loop in the setupMarkerTracking method instead. To be honest that's where the code belongs, so I'm saving a lot of repeated code. However after doing this, now no models show up on markers when the code runs.
After further debugging, it seems no markers are being tagged as found, even when I hold them up to the camera. Will study the new code more closely and see if I can work out what the issue is.
Pi labelling
Graham contacted me with a little job involving uniquely identifying and labelling the new Raspberry Pi's with sequence numbers before they are distributed to the first year students. Basically I need to write a script that will run on a standalone non-networked laptop that will monitor for Pi's being connected, and when one is found connected it will then log the MAC address with the appropriate sequence number.
This sequence number will then be written on the Pi's case with a permanent marker pen.
So step one, find out how computers are being used to connect to Pi's and what writing a script to do that would involve!
Monday, 12 August 2013
Day 41: Back in business
Back
The holiday was good! Arrived back in Kilburn to carry on the AR work, and leapt promptly back into the coding of sound.
Found out how to check markers to see if they are in the view or not, and also found out how to set a buffer number of frames. If the marker has been seen within the last x frames, then if it is lost for whatever reason (the marker moves out of the view, or it blurs so the camera doesn't capture it for a few frames, etc) it will still remain found until the buffer expires or the marker is seen again.
Useful so we don't have choppy audio. Am coding it in now.
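The buffer idea itself is simple enough to sketch in a few lines. This is only an illustration of the logic, not Goblin XNA's actual API: a marker stays "found" until it has gone unseen for more than the buffer's worth of consecutive frames.

```python
class MarkerBuffer:
    # Illustrative sketch of the frame-buffer idea: update() is called
    # once per frame with whether the camera saw the marker, and
    # is_found() stays True until the marker has been unseen for more
    # than buffer_frames consecutive frames.
    def __init__(self, buffer_frames=10):
        self.buffer_frames = buffer_frames
        self.frames_since_seen = None  # None means never seen at all

    def update(self, seen_this_frame):
        if seen_this_frame:
            self.frames_since_seen = 0
        elif self.frames_since_seen is not None:
            self.frames_since_seen += 1
        return self.is_found()

    def is_found(self):
        return (self.frames_since_seen is not None
                and self.frames_since_seen <= self.buffer_frames)
```

With a buffer of a few frames, a marker that blurs for an instant never reads as "lost", which is exactly what we need to avoid choppy audio.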
Tasks to focus on in the near future
Apart from implementing the sound that I mentioned above, there are a fair few loose ends to tie up in order to get the AR up to scratch:
- Documenting entire new system
- Test AR installation on a new system - Hoops, to be precise. Ensure portability of AR.
- Add handling for models that don't load, so the user can see an error message, attempt reloads, etc.
- Make a system for people to easily add models? GUI?
- more that I can't remember right now
So there is quite a bit of stuff to be working on.
This was Toby's last day before flying out to Japan for his holiday, so there was a tearful goodbye. He'll be back in a few weeks though, and in the meantime I've been asked to help out with general programming tasks for the school with Graham Gough, John Latham and Steve Pettifer, so looking forward to some new stuff too.
Friday, 2 August 2013
Day 40: Sound.
ExportJPEGs bug
Fixed a bug in ExportJPEGs where the FileSystemWatcher would call the "fbx found" event handler multiple times for the same fbx models in the incoming folder. Realised it's because of the regex replacement I do on every fbx found: that modifies the file, which triggers the FileSystemWatcher again, since I've set it to watch for changes in files.
Unfortunately I cannot simply stop the Watcher from detecting file changes, because I rely on that very side effect to detect files. I have a touch method for when we want to load existing models in the monitored directory: it changes the creation time of every file, thereby forcing the Watcher to notice them.
This is necessary to load existing files on startup, because there is no way to get the Watcher to see files that are already there.
So now I've edited my touch method: instead of resetting the creation time on every file and having the Watcher detect a file change, it now renames every file in the folder and has the Watcher detect renames. A complicated bit of regex renames any filename to start with a full stop; if the filename already starts with a full stop, it replaces the full stop with a comma. This way the user can also tell which files were loaded because they pre-existed, as all those filenames will start with a comma or a full stop.
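For reference, the rename scheme described above boils down to something like this (a Python sketch of the logic rather than the actual C# code; `touch_rename` is my own illustrative name):

```python
import re

def touch_rename(name):
    # Sketch of the rename-based "touch": a leading '.' marks a file
    # that has already been touched, so we flip it to ',' instead of
    # adding another dot - every touch still fires exactly one rename
    # event for the Watcher to pick up.
    if re.match(r'^\.', name):
        return re.sub(r'^\.', ',', name, count=1)
    return '.' + name
```

So `model.fbx` becomes `.model.fbx` on the first touch, and `,model.fbx` on the next, and either prefix tells the user the file pre-existed.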
VMs and AR
Following on from yesterday, when I was determining how to move the AR to a fresh machine so that it can run the demos successfully. Okay, worked out what the issue was on the Virtual Machine and why it wouldn't load the project files: apparently you need to install XNA Game Studio in order to load .contentproj files. Installed the redistributable framework package and then tried to run ExportJPEGs.exe, and it went rather well.
Got up to the point where it told me it couldn't find a Direct3D device. It seems that VirtualBox's virtual graphics adapter isn't good enough to run our AR programs (which are tagged as HiDef Direct3D apps). I don't see this working in a VM any further than this, so the next step will be testing it on a real machine. Katy, Donald and Walter are all fully prepped and installed for Visual Studio XNA development, so I can't use them, which leaves Hoops - currently in use by Georgiana. So I think I will use Walter or Donald after all: just uninstall everything related to AR from one of them and continue from there.
In any case, the purpose of this exercise was to determine the minimum software installation we require in order to run the AR applications, so it is not entirely essential. I already know that we can definitely get the AR to work by moving about 1GB of source code and models and installing a bunch of dev packages on the target machine. And from the testing up to this point (i.e. up until the Virtual Machine failed me), it seems the AR could work with a bare minimum of:
- 100MB binary files and compiled models etc, all in one folder
- .NET 4.5 runtime installer
- XNA 4.0 runtime
- VC redistributable package
If that is indeed all we need then we're in pretty good shape, it's pretty portable stuff.
Sync errors
Tested ExportJPEGs today, and noted that some models were failing to build for some reason. After further inspection I found that if I uploaded a whole bunch of files at once to the web server, they downloaded somewhat slowly back into the incoming directory. This meant that in some rare situations, the model would be present in the incoming model folder but not its texture. So the model would fail to build because the texture couldn't be found.
To fix it, I added a test to check for the texture, and if it's not there, wait up to 60 seconds. If it's still not there, print an error message, abandon loading that model and carry on with the program. The GUI tells the user which model is being loaded, displays a live countdown when a file cannot be found, and shows the path of the texture it's looking for and which model it belongs to as the count runs, so a user who knows what they are doing can attempt a fix.
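The wait-and-give-up logic is roughly this (an illustrative Python sketch, not the real ExportJPEGs code - `wait_for_texture` and its parameters are my own names):

```python
import os
import time

def wait_for_texture(tex_path, timeout=60, poll=1.0):
    # Sketch of the retry logic: poll for the texture file until the
    # timeout expires. Returns True as soon as the file appears,
    # False if we give up - the caller then abandons that model and
    # carries on with the rest of the program.
    waited = 0.0
    while waited < timeout:
        if os.path.exists(tex_path):
            return True
        time.sleep(poll)  # the GUI would show the live countdown here
        waited += poll
    return os.path.exists(tex_path)  # one last check before giving up
```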
A further improvement could be to add models that could not load to a section on the GUI, and allow the user to click on them, to attempt a reload on them, and to view error messages and such about why they could not load. I will implement this later on as time permits.
To be honest this whole thing wasn't exactly a simple task and required some devastatingly interesting regex usage, for example to extract the name of the texture from inside the model file itself:
String texPath = Regex.Match(File.ReadAllText(e.FullPath), @"[rR]elative[fF]ile[nN]ame: ""(.+)""").Groups[1].Value;
And then I appended that to the model path to get the texture path in full... and other interesting nuances :)
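For anyone curious, the same extraction translated into Python looks like this (the sample `fbx_text` line is an assumed fragment of an .fbx file, just for illustration):

```python
import re

# Assumed fragment of an .fbx model file containing the texture
# reference - the real file is binary-ish text with lines like this.
fbx_text = 'RelativeFilename: "textures/brick.png"'

# Same pattern as the C# version: match RelativeFilename in any of
# its capitalisation variants and capture the quoted path.
m = re.search(r'[rR]elative[fF]ile[nN]ame: "(.+)"', fbx_text)
tex_name = m.group(1)  # the texture path, e.g. textures/brick.png
```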
But enough of that, moving on! On to a concept that has long awaited its time to shine. And well, now that time has finally arrived. Yes, I'm talking about..
Sound
Found some very nice sound samples online for various effects, and set about collecting them.
The latest version of XNA can handle mp3, wma, and wav files directly, and has some very useful tools for 3D sound; plus allows full control over playback (looping, pausing, stopping, playing, continuing, etc).
I ran into a roadblock very soon when trying to implement it, though: Goblin XNA does not provide any way of telling the user when a marker card has been found! This matters because I only want certain sounds to play when certain demos (i.e. certain markers) are in the view - and those sounds should stop when those demos are removed from the view. The only way to do this is to manually check every single marker card in the Update() function to see if it's visible, and add my own code to implement a buffer time for when a marker card counts as in view and out of view - because the camera does lose track of markers for minuscule (and longer) periods of time when the cards are moved or a marker is briefly obscured by light or something. We don't want stuttering audio starting and stopping at every movement of the card!
Trying to get around this now, but the current code is looking uglier by the second! Am studying the Goblin source code to determine if there are any obscure methods I can use to make this more intuitive.
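The per-frame polling I'm stuck with looks roughly like this (an illustrative Python sketch, not Goblin XNA's API - the marker/sound names and the player object are my own stand-ins, and the in-view flags are assumed to have already been through the frame buffer described above):

```python
class SoundPlayer:
    # Stand-in for something like an XNA sound instance with
    # play/stop control and a playing flag.
    def __init__(self):
        self.playing = False

    def play(self):
        self.playing = True

    def stop(self):
        self.playing = False

def update_sounds(markers, sounds):
    # Called once per frame from Update(): markers maps a demo name
    # to whether its card is currently in view (after buffering),
    # sounds maps the same name to its player. Start a sound when
    # its card appears, stop it when the card leaves the view.
    for name, in_view in markers.items():
        player = sounds[name]
        if in_view and not player.playing:
            player.play()
        elif not in_view and player.playing:
            player.stop()
```

It works, but it's exactly this manual bookkeeping for every marker that's making the real code look uglier by the second.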
Aaaaaaaand I'm taking the next week off on holiday (surprise!) for religious observances (Eid and Ramadan) plus general relaxation, so I'll be getting back to this stuff a week on Monday all being well. Planning to get started on my MEng report too after the time off so more stuff to look forward to (cough).
And in other news, Toby just confirmed my 3-week extension for this vacation placement, so I get to spend even longer partaking in more furious coding fun! Great news - got to admit this vacation has been super productive; really enjoyed myself and learnt an absolute ton.
Day 39: Documentation, Installing AR on a new machine
AR
Finished off the instructions for the 123D Machine operator. Also installed a new Virtual Machine on Donald in order to test what software is required to install the AR on a new machine. We may want to run LowryDemo or ExportJPEGs on a machine other than Donald/Walter/Katy in future, and would therefore need to know exactly what is required to get them to run. Ideally you wouldn't want to install the full Visual C# Express 2010 developer package!
All you need to run the programs on a PC with everything (i.e. all the Visual Studio and XNA dev stuff) installed is the bin folder, the data folder (just shaders and fonts - not models or textures), and the dlls folder.
I copied these to the new clean virtual machine, and started testing.
First run of ExportJPEGs: force close, requires .NET 4.5
Installed .NET 4.5
Second run: force close, no meaningful error
Installed the Visual Studio 64-bit redistributable package
Third run: force close, no meaningful error
Installed VS 2010 C# Express and Service Pack 1. I intend to open the project in VS and run a debug to see why it isn't working; then we can determine which runtimes we need.
Can't open the project with VS 2010: apparently the .contentproj file is not supported by this version of VS.
Now I'm running through hundreds of Windows Updates to bring VS up to the latest version.
Sound
Toby refreshed the idea of incorporating more sound into the AR - because at the moment it's a bit lifeless. I was looking into it previously, but it got overshadowed first by porting the code to XNA 4.0, and then after that by Animation13, and then straight after that I jumped into automation and working on sound always seemed to get pushed aside. But no longer!
So, some interactive sound would definitely liven up the program. For example: boots on gravel when the man walks about, a crackling fire, a spaceship engine sound as the ship flies through the loop demo, horrible creepy insect sounds when the ant shows up. Strange eerie music when we get the heads up?
The new sound features in the new Goblin 4.1 should make this process far more fun than it was in the old version.