
Wednesday, 31 July 2013

Day 38: OpenGL bugs, new PSU, 123d-AR plan, documentation

Plan of the 123D-AR demo

How it's all going to work! Used OneNote to sketch out a basic draft of how everything works and sent it to Toby for review. It's a little muddled, but hopefully it makes enough sense.

123D

Saw this 123D capture from Animation13, which interestingly managed to grab the television screen:
Good one
Interesting. Anyway..

I found you can actually upload multiple 123D captures simultaneously: just run multiple instances of the program and they all work independently. This will save a lot of time and let us get through many more 123D -> AR capture processes at events.

Just remember to sign in on every 123D instance running simultaneously; if you aren't signed in, you can't export models.

OpenGL 3.3

Helped Georgiana fix an OpenGL 3 bug today; she's been porting the older OpenGL 1 lab code to the newer version to bring it up to date. Previously, the code to draw primitives (like a circle) took points sequentially, so you could have a loop that generated the next point on every iteration and passed it straight to the function. The newer OpenGL instead takes an array of complete points at once (and, as is standard in C, you tell it how many points from the array you wish it to draw).

Attempting to pass points into the new functions one at a time from a loop was the root of the problem, and it was soon fixed. Now we get correctly drawn circles and primitives! The change means we build an array holding all the points before passing it to the drawing function, at the cost of one extra array in memory.
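The shape of the fix can be sketched like this (in C# rather than the lab's actual C/OpenGL code, and with names of my own): build the complete point array up front, then hand the whole array plus a count to the draw call, instead of emitting one point per loop iteration.

```csharp
using System;

static class CircleVertices
{
    // Build the complete vertex array up front, the way the new
    // array-based API wants it, instead of generating one point
    // per iteration as the old immediate-mode loop did.
    public static float[] Generate(float cx, float cy, float r, int segments)
    {
        var points = new float[segments * 2]; // x,y interleaved
        for (int i = 0; i < segments; i++)
        {
            double theta = 2.0 * Math.PI * i / segments;
            points[2 * i]     = cx + r * (float)Math.Cos(theta);
            points[2 * i + 1] = cy + r * (float)Math.Sin(theta);
        }
        return points; // pass this array, plus 'segments', to the draw call
    }
}
```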

Donald's new PSU

... is installed and running very nicely indeed. Astoundingly quiet and reliable device, a BeQuiet! L8 600W. Should outlast the rest of the system by a good few years I hope.

Documentation

Started making the user manual for the AR. There will be one sheet for the 123D operator and another for the AR operator, each with step-by-step instructions, FAQs and general guidelines.

Will also document how to reinstall the AR - 123D setup on new machines.

Tuesday, 30 July 2013

Day 37: Polishing ExportJPEGs

Added lots of exception handling to ExportJPEGs. It should now rarely crash on file and directory I/O issues; instead it prints a (hopefully useful) error message (see below) and continues, provided the problem isn't fatal.

Added a bunch of methods for handling the red label text and using it to show errors and messages when the program is running:
setLabel(String text)
setPersistentLabel(String text, int time) (time is how long the label persists)
setPersistentLabel(String text) (time defaults to 5 seconds)
renderLabel()

These affect the large red text at the top of the screen, which prints useful messages and prompts when necessary. Via these methods you can set the text to anything you like; normally, once the label is set, it stays that way until something else changes it. Now you can also set a persistent label, which blocks any change until a certain time has passed (except another persistent label call). This is useful for e.g. error messages that you want to be absolutely sure stay on the screen for a number of seconds.
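The persistence rule boils down to remembering an expiry time. A minimal sketch of the idea (my own names, not the actual ExportJPEGs code; the current time is passed in to keep it easy to reason about):

```csharp
using System;

class StatusLabel
{
    public string Text { get; private set; } = "";
    private DateTime persistUntil = DateTime.MinValue;

    // Normal set: silently ignored while a persistent label is active.
    public void SetLabel(string text, DateTime now)
    {
        if (now < persistUntil) return;
        Text = text;
    }

    // Persistent set: always wins, and blocks normal sets for 'seconds'
    // (defaulting to 5, matching the two-argument/one-argument pair above).
    public void SetPersistentLabel(string text, DateTime now, int seconds = 5)
    {
        Text = text;
        persistUntil = now.AddSeconds(seconds);
    }
}
```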

The render label method now also splits long text over multiple lines before rendering. Previously a too-long label simply ran off the edge of the screen; now you can actually see the whole message.
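The line-splitting part is a classic greedy word wrap. A sketch (character counts stand in for the real pixel-width measurement the renderer would use):

```csharp
using System;
using System.Collections.Generic;

static class LabelWrap
{
    // Greedy word wrap: pack words into lines of at most maxChars
    // characters, starting a new line when the next word won't fit.
    public static List<string> Wrap(string text, int maxChars)
    {
        var lines = new List<string>();
        var current = "";
        foreach (var word in text.Split(' '))
        {
            if (current.Length == 0)
                current = word;
            else if (current.Length + 1 + word.Length <= maxChars)
                current += " " + word;
            else { lines.Add(current); current = word; }
        }
        if (current.Length > 0) lines.Add(current);
        return lines;
    }
}
```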

Also added functionality to load existing models at startup. If the incoming folder contains any models when the program starts, the program prompts the user to either load them or ignore them; it's all done quite nicely and intuitively in the GUI. So if there are a bunch of heads we want shown from the get-go, or if the program shuts down and has to restart, you can safely get all the heads back as long as you have the original models. On that train of thought, I also set ExportJPEGs to clean out the built-heads folder on startup, so heads built before the program started won't get loaded up again. This prevents LowryDemo from auto-loading old heads and from wasting storage space. Hmm, actually I think I can improve this... okay, we'll see how it goes.

Now making a separate background thread to check for files on the server every so often; this way the GUI won't freeze while files download.

Also set the gif-upload part of the program to always use unique gif names, so no gif can ever be overwritten.

The server file sync will work like this: the push script on the 123D machine keeps the server folder as an exact mirror of the push (outgoing) directory, so anything removed from the outgoing directory gets deleted off the server. This gives the 123D people full control over the models on the server, including replacing them. If a model is replaced, the AR/ExportJPEGs program will notice and build it again (it won't replace its old entry on the list, though; once a model is built, it's there for good). At the end of the day, if they want to clean up the server folder they just empty the outgoing directory and wait for the script to update (which will probably be set to run once a minute).
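WinSCP's .NET assembly exposes exactly this mirror-with-deletes behaviour. A minimal sketch (hostname, credentials, fingerprint and paths are all placeholders, not our real setup):

```csharp
using WinSCP; // WinSCP .NET assembly (WinSCPnet.dll must be referenced)

class PushMirror
{
    static void Main()
    {
        var options = new SessionOptions
        {
            Protocol = Protocol.Sftp,
            HostName = "example.server.ac.uk",               // placeholder
            UserName = "demo",                               // placeholder
            SshHostKeyFingerprint = "ssh-rsa 2048 xx:xx:..." // placeholder
        };

        using (var session = new Session())
        {
            session.Open(options);

            // Make the remote folder an exact copy of the outgoing folder.
            // removeFiles: true is the crucial bit: anything deleted from
            // outgoing/ gets deleted off the server too.
            var result = session.SynchronizeDirectories(
                SynchronizationMode.Remote,
                @"C:\123d\outgoing", "/srv/models",
                removeFiles: true);

            result.Check(); // throw on the first transfer error
        }
    }
}
```

Run on a timer (e.g. once a minute) this does the whole push-script job in a few lines.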

Importantly, the 123D people MUST NOT EDIT OR RENAME TEXTURES AFTER THEY HAVE BEEN EXPORTED from 123D. Just export and send them as-is; if any texture is renamed after export, it won't work. Models can be renamed, but I strongly suggest exporting and sending everything untouched. It's best if the 123D people export directly into the outgoing folder.


Anyway, all these details plus anything important will be packed into a little user manual shortly, for the future operators of the wondrous 123D-AR demo.

Monday, 29 July 2013

Day 36: Exceptions, LowryDemo, etc

No more Cygwin

I learnt some fantastic stuff today: I can remove the need for Cygwin from the AR completely and make it a fully native, portable Windows executable. Well, you may need to install some Visual C++ redistributables, and I'm not sure it will work without XNA installed... but it's pretty standalone now! A few required .exe's need to travel around with the program, but that's basically it.

First off, a big thank you to the University of Manchester for making certain file transfer protocols unusable over wifi. If lftp (and ftp and rsync and everything else I painstakingly tried and tested) had connected over wifi, I would never have looked for alternatives and learnt all the wonderful gadgetry I learnt today.

So first off, I learnt that WinSCP not only has a synchronize feature to rival lftp mirror, it also comes as a .NET assembly and a portable executable - and it works over the University wifi to boot.

Now, since I can use WinSCP from inside the ExportJPEGs program (remember, the .NET assembly is available), I don't need Cygwin's scp or ssh or lftp or ftp or any of that gubbins. Which means the only remaining reason for Cygwin was the ported ImageMagick convert tool. After a little research I found that ImageMagick also offers a portable Windows version, which comes with a little convert.exe that, after testing, works exactly as intended. So I suddenly have no use for Cygwin at all.

Well, one thing's for sure: I've completely given up on making this program cross-platform. That's probably a good thing, because I don't fancy working out how to manually port all of Goblin's code to Linux and Mac (XNA has been ported to Mac via MonoGame and MonoDevelop, but no one has bothered with the Goblin wrappers). So from here on out, the University of Manchester AR and 123D wonder is a completely Windows-based offering.

In other news, I managed to destroy LowryDemo: I messed up using the git repository and ended up with ridiculous "<<<<<<<< HEAD" conflict-marker lines in random files, which prevented the whole thing from loading or building. It didn't even tell me which files had been victimised, so the only solution was to revert to a backed-up copy (luckily I have a few). Then I had to redo all the content optimizations I had made: moving textures and models to a central folder, setting relative pathnames for most of the models (converting them to binary first), etc.

First automated draft complete

That's right. I finished implementing the automated loader code in LowryDemo. Now it automatically loads up every single precompiled head in the heads directory, then waits for new ones. As ExportJPEGs compiles new heads, it automatically loads and adds them to the latest free model set.

It's all working!

At one point I got totally lost because WinSCP doesn't copy files the obvious way. It first creates a placeholder file, then renames it once the copy is complete. While this is a great idea, it meant that my file watcher (which watches for newly created .fbx files) completely missed the finished files until I told it to watch for renamed .fbx files as well. A simple problem, but it had me lost for an hour or two in despair, thinking all was broken.
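For reference, the fix amounts to wiring up both handlers on the watcher. A sketch with my own names (the watched directory is a placeholder):

```csharp
using System;
using System.IO;

static class FbxWatcher
{
    // Watch a folder for finished .fbx files. WinSCP downloads to a
    // placeholder name and renames on completion, so the finished file
    // only ever appears as a *rename*: without the Renamed handler,
    // the watcher misses every downloaded model.
    public static FileSystemWatcher Watch(string dir, Action<string> onModel)
    {
        var watcher = new FileSystemWatcher(dir, "*.fbx");
        watcher.Created += (s, e) => onModel(e.Name); // locally created files
        watcher.Renamed += (s, e) => onModel(e.Name); // WinSCP-downloaded files
        watcher.EnableRaisingEvents = true;
        return watcher;
    }
}
```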

Tomorrow I improve it, add exception handling, etcetera.

Friday, 26 July 2013

Day 35: Donald's PSU

Doing what I do best: troubleshooting faulty PCs

Donald wouldn't start up today. Power comes through, everything starts whirring, but nothing comes on screen. At first I thought it was the fact that I hadn't used all 9 motherboard standoffs when installing the motherboard, and now some parts were shorting out. So I removed the mobo, stole some standoffs from an old Pentium IV PC nearby (I'm sure no one is going to use it...) and reinstalled the motherboard with standoffs all in place. But it still didn't work.

I checked the RAM, drives and graphics card, and noticed that the red CPU LED was lighting up and staying lit. Normally it goes off once the mobo passes the CPU test. These little LEDs are a wonderful feature of this Asus M5A97 motherboard, letting the user know instantly where the board is failing as the PC boots up. There are LEDs for CPU, RAM, VGA (graphics card), and boot device.

Anyhow, the CPU was either broken or not getting enough power. I checked the power cables to the CPU and to the motherboard, but it still didn't work.

I then removed the entire motherboard and stripped off absolutely everything except the RAM, the CPU, and two power cables from the PSU: the 8-pin CPU power and the 24-pin ATX motherboard power. Didn't work; same problem with the CPU LED lighting up.

I then opened Walter and used the 8-pin and 24-pin cables from his PSU, and boom, it all worked!

So we ordered a new BeQuiet! Purepower L8 600W (only one of the most reliable PSUs you can buy) from Scan, which should be here soon.

Getting AR up on Walter

Moved the development to Walter in the meantime, luckily all the code was uploaded to our BitBucket version control system so I had the latest code ready to go.

Installing the AR on a new PC involves installing all the component software (Visual Studio, XNA, Alvar, OpenCV, etc.) plus Cygwin (for the shell scripting part of it).

In Cygwin, I installed openssh, ImageMagick (both packages), and lftp.

Aaaaaaand it looks like lftp doesn't work over WiFi. Oooooh deary.

Welp, there goes the fantastic lftp mirror command I liked so much!

Finished off the Automated GUI and it works great! Only thing left now is to get Lowry Demo to automatically load new heads, should be a cinch! Oh and also to find a replacement for lftp mirror. It really is a shame that it doesn't work over WiFi, because it is just so ideal for what we want.

And I also need to add exception handling for when a build fails. (VERY IMPORTANT HAMZA DON'T FORGET)

LaRue

You guessed it: a new PC! It's an old Dell T3400 workstation called Atlas50 or something, which for some reason has 8GB of RAM in it (!). It has a decent dual-core CPU and an 80GB hard drive, so it isn't ancient. It's a real pain trying to get drivers installed on it, though: the darn thing doesn't recognise USB devices (because the chipset drivers haven't been installed) and I can't connect to the internet to download them (because the ethernet driver hasn't been installed). Gads. Well, it shall have to be done on Monday then.

Till Monday!

Thursday, 25 July 2013

Day 34: Walter arrives, MEng report, Vacation Extension and a dual-list GUI

Walter arrives

I now have two PCs on my desk! I do like having two PCs on my desk.

Reinstalled Walter with the on-campus Windows 7; he's now activated correctly. Finished installing everything up to date and got the AR demo with version control all sorted out and set up. There are massive amounts of backups: on Walter, on Donald, and a full set taking up 300 or so GB on the 1TB external hard drive.

ExportJPEGs and models

So it looks like having a scrollable list of heads and allowing the user to select between them is going to be a little difficult (i.e. I have no clue how to finish the implementation of ScrollPane and it would waste too much time to try).

So as an alternative I could have one list with space for 10 head names, and then allow the user to press a button to go the next set of 10 names, and the next and so on and so forth. Two buttons in fact, to go back and forward between sets of heads. Then the user can export them as they see fit.

Implemented it, and ended up with two adjacent lists on the GUI, because that means I can fit 20 heads on one screen instead of just 10, which is great.

Some fairly complicated programming was involved, so I used a bit of good old abstraction to make it all easy on the brain. Deciding when the forward and back buttons should be enabled and disabled, working out when to initialize a new list for each of the two visible lists, when an initialized list can be assigned to a visible one, and how to determine which model to load when all I have is a list index that could have come from either list... it's nigh on impossible to keep straight. So I simply made a bunch of wrapper methods around the two lists that make them behave like a single, simple list.

Namely:
makeNewHeadList() - which actually makes a pair of lists but the user doesn't see that.
setHeadList(int whichListToShow) - which shows on the GUI the correct pair of lists, indexed 0, 1, 2 etc, each index referring to a pair of lists.
getModelIndex() - which takes a list selection (i.e. after a user selects an item in one of the lists on the GUI) and returns the index of the corresponding model so the program can load whichever model was clicked.
addModelToList() - takes a model name, and adds it to the latest slot in whichever of the two lists it needs to go.

(remember each list is showing 10 names of models, the user clicking on a name makes that model show up in the GUI)
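Behind the wrappers, the index juggling reduces to simple arithmetic. A reconstruction with my own names (not the actual ExportJPEGs code): with two visible lists of 10, a click on page p, list w, row r maps to one flat model index.

```csharp
using System;

static class HeadListMath
{
    public const int RowsPerList = 10;
    public const int ListsPerPage = 2;

    // Flat model index for a selection at (page, whichList, row),
    // where whichList is 0 (left list) or 1 (right list).
    public static int ModelIndex(int page, int whichList, int row)
        => page * ListsPerPage * RowsPerList + whichList * RowsPerList + row;

    // Which page a given model lives on: used to decide when the
    // forward/back buttons should be enabled.
    public static int PageOf(int modelIndex)
        => modelIndex / (ListsPerPage * RowsPerList);
}
```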

Here's a little taster of what it's looking like:
ExportJPEGs running - the heads will show up in the list as they are added to the watch folder


Placement Report

Got an email from the MEng manager Nick Filer informing us that we will need to write a full-on 35-page report on the projects us MEng students have been doing over the summer - worth 15 credits alone! I'm so glad Toby had me blog everything, because this is just great: I have all the information I need right here in this blog, and all I need to do is write it up into a report. I would have loved to do this Augmented Reality stuff as a third year project (not in any way belittling the project I actually did, a SpiNNaker demonstration, because that too was awesome), and now I can finally live the dream with a lovely little report and a seminar all about the work I've been doing.

This is quite probably the first time the prospect of writing a report hasn't been met with unadulterated despair bordering on utter doom - what a nice change :-)

The vacation experience

So, in general, this summer has been absolutely fantastic. I've enjoyed every bit of it: the copious random tasks, the huge events, rebuilding some ultra-extreme-cutting-edge PCs, the difficult programming, and combining Visual C#, .NET, XNA, 123D, shell scripting and various other technologies into one heck of an Augmented Reality program. So much so that Toby is actually pursuing an extension to my work placement! If that goes through it would just be great. I could spend three more weeks doing what I do best and having a great time doing it, and as a result we should see a super polished and refined Augmented Reality experience by the time this summer is done. Count on it.

Wednesday, 24 July 2013

Day 33: Results of hard labour

Stuffage

Today I did the following:
  • Got the FileSystemWatcher working in ExportJPEGs (calls an event handler whenever a file is added to the folder)
  • Implemented ExportJPEGs to do everything up to the uploading of the gif when the user clicks the export button
  • Implemented ExportJPEGs to automatically run the lftp monitor upon starting
  • Implemented ExportJPEGs to fix the model and apply the rename regex inside .fbx files when a file is added to the watch directory
  • Implemented ExportJPEGs to automatically build models once the files are ready
Now I basically want a system where new heads that are imported from the webserver show up in the GUI.

So the new files get built automatically and sent to the common folder (that LowryDemo and this ExportJPEGs can access).

ExportJPEGS displays a list of all head files that are ready for loading (the pre-load building is happening automatically, after files are built they are shown in this list).

The user can select a head from the list and click a button to load it, which brings it into the view so they can move it about and export it as a gif as and when required. In this way the user can switch between heads back and forth and re-make-and-upload gifs.

Update: just found out GoblinXNA has only a half-finished version of a ScrollPane. It's basically disabled, but in the source code you can see a mostly-complete-looking implementation with the comment "IMPLEMENTATION NOT YET FINISHED" at the top.

Without a ScrollPane our list won't be able to scroll up or down, so when we have 50 or so heads in they will just overflow the view and run off the end.

Well... I've been having a crack at fixing scroll pane myself. Joy.

Tuesday, 23 July 2013

Day 32: GUIs and exporting, Animation13 web gallery

Batch Photo Processing - sed and imagemagick

Got my CS password reset and now I can successfully get into babbage.

ImageMagick has a tool called mogrify for batch-converting images, but I simply hacked together a shell script or two to get the job done, renames and all.

This is the basis:

for infile in *; do
    outfile=$(echo "$infile" | sed -n 's/\./.preview./p')
    test -z "$outfile" && continue
    convert "$infile" -strip -density 72 -thumbnail 300x300^ -gravity center -extent 300x300 "$outfile"
done

This took all the images called img_xxxx.jpg and made previews to the required specification. A similar command was run beforehand to turn the raw images into suitable main images at 72 dpi and 1200px wide, with the height scaled proportionally.


And then back to automating ExportJPEGs

Coding and GUIs

So the plan for the GUIs is to have 2 sliders, one to control the vertical height of the model and the other to control its size. There will also be a checkbox to flip the head upside down (because some heads are simply upside down).

All done, and it works nicely!

Easiest to use JPEG exporter on the market!


Now I need to manage loading content into the program, so let's do a bit of thinking..

I know I can build XNB files outside of the program. I can manually press a key (or a button on the GUI) to load the newly built files in and display them in the GUI once they have been built (so the user can export JPEGs). I wonder if this can be automated..

Found that you can watch a directory for file changes using a FileSystemWatcher, details here: http://www.codeproject.com/Articles/26528/C-Application-to-Watch-a-File-or-Directory-using-F

So perhaps I can implement a system that monitors for files and displays them in a drop down list on the GUI? And when a new head model arrives the user can click the head in the drop down list, and the program will build it, load it so the user can export jpegs, and send it to the lowry demo program.

This will completely remove the need for shell scripting to manage and move files about, but we will still use shell scripts to monitor the web server and grab files, and to make and upload gifs after JPEGs have been exported.

Now the final step if I do implement this would be to tell LowryDemo to load the next head once the head is available. Again, this can presumably be done with the FileSystemWatcher - watch the content directory for new .xnb files coming in, when they do come in then load them and add to the head model set.

FileSystemWatcher... what an utterly fantastically useful tool!

The answer has been comprehended

And now I've discovered you can run shell scripts from Visual Studio C# applications (more precisely, you can run them from an MS-DOS batch file (.bat), and you can run such batch files from C# programs). In other words, I can automate the system so that all the user has to do is run this ExportJPEGs program, and the program will automatically do just about everything:
  1. run LowryDemo
  2. run the lftp monitoring client to grab new heads from the central file server
  3. fix the model and texture files, rename them etc
  4. build them
  5. have LowryDemo automatically sense newly built files, load them and add them to the AR program
  6. load the files into this ExportJPEGS program, display them in the GUI for the user to set the view etc
  7. Let the user click the export button, which will not only export JPEGs but also convert them into a gif AND upload the gif to the web server.
So, now let's see if I can get all the parts of this to work together nicely.
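Launching a batch file from C# is straightforward with Process. A sketch (the script path is whatever the real .bat ends up being called):

```csharp
using System;
using System.Diagnostics;

static class ScriptRunner
{
    // Run a .bat (which can itself invoke convert.exe, WinSCP, etc.)
    // via cmd.exe, capturing its standard output.
    public static string Run(string command, string args = "")
    {
        var psi = new ProcessStartInfo("cmd.exe", "/c " + command + " " + args)
        {
            UseShellExecute = false,       // required to redirect output
            RedirectStandardOutput = true,
            CreateNoWindow = true          // don't flash a console window
        };
        using (var p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            return output;
        }
    }
}
```

The same helper works for the MSBuild step: Run(@"build-heads.bat") with the batch file doing the msbuild invocation (batch filename hypothetical).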

Automation tasklist:
  • Get FileSystemWatcher working in ExportJPEGs (return event when file added)
  • Implement ExportJPEGs to do everything up to the uploading of the gif, when user clicks export button
  • Implement ExportJPEGs to automatically run lftp monitor upon starting
  • Implement ExportJPEGs to fix model and rename regex inside fbx files when file is added to watch directory
  • Implement ExportJPEGs to automatically build models after the files are ready
  • Get FileSystemWatcher working in LowryDemo, load new head after content is built, manage model sets etc

Day 31: ExportJPEGs automation

ExportJPEGs automation

The user has to perform some fine tuning of the export jpeg parameters in order for an acceptable rotating head to be exported.

This is because every head coming from 123D Catch differs widely in size, orientation (luckily they are only ever the right way up or upside down, so normalizing the orientation isn't too difficult) and vertical position, so the camera in the ExportJPEGs program must be positioned correctly for every head to give consistent images overall.

The best way to manage this user input is via a GUI. The program could run and display a test image of the model, the user changes parameters in the GUI and the test image updates itself. When satisfied, the user hits the "Export" button and the JPEGs are output.

So today, I made GUIs.

Need to incorporate exporting of JPEGs into Goblin XNA so that I can use Goblin's GUI wrappers and easy to use GUI features. Trying to manually work out WinForms would take too long.

Implemented the write-to-alternate-render-target-and-dump-to-JPEG feature. The dumped images show the GUI correctly, but don't show the rotating shape in the centre at all.

I'm using the Scene.RenderScene() method to draw the scene to the render target. Normally you use Scene.Draw(), but that also updates all the physics etc. that you want updated once per frame; RenderScene() just draws the scene as-is without updating anything (which seemed like what we want: we'll be drawing the scene 36 times in a single frame, rotating the model 10 degrees each time).

After consideration, it seems best to use the original Scene.Draw() anyway, because we do update the scene graph by rotating the shape. RenderScene doesn't apply updates to the graph, so it would draw the same shape 36 times in one draw cycle.

To use Scene.Draw() we simply need 36 frames rendered to the back buffer and dumped off as JPEGs consecutively, with the object rotating 10 degrees each frame. To implement this I'll add a simple boolean called drawingJPEGs or something; while it is true, no other update to the scene graph and no GUI input is allowed. In addition, a message saying "PROCESSING" or something will appear over the GUI, so the user knows JPEGs are being exported.
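Sketched as code, the plan looks roughly like this. This is only a sketch: 'scene', 'headTransNode' and 'renderTarget' stand in for the real GoblinXNA/XNA objects, and the Scene.Draw signature is approximate, not the actual ExportJPEGs code.

```csharp
using System.IO;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public partial class ExportJPEGs : Game
{
    bool drawingJPEGs = false; // while true, block GUI input and other updates
    int frameIndex = 0;

    protected override void Draw(GameTime gameTime)
    {
        if (drawingJPEGs)
        {
            // Rotate the head 10 degrees per frame; 36 frames = one full turn.
            headTransNode.Rotation = Quaternion.CreateFromAxisAngle(
                Vector3.Up, MathHelper.ToRadians(frameIndex * 10f));

            GraphicsDevice.SetRenderTarget(renderTarget);
            scene.Draw(gameTime.ElapsedGameTime, false); // full draw, updates applied
            GraphicsDevice.SetRenderTarget(null);

            // RenderTarget2D derives from Texture2D, so it can save itself.
            using (var fs = File.OpenWrite("frame" + frameIndex + ".jpg"))
                renderTarget.SaveAsJpeg(fs, renderTarget.Width, renderTarget.Height);

            if (++frameIndex == 36) { drawingJPEGs = false; frameIndex = 0; }
        }
        else
        {
            scene.Draw(gameTime.ElapsedGameTime, false);
        }
    }
}
```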

Foamalux, Animation13 Gallery

The foamalux arrived from eBay!
I think it may be a bit more flexible than we anticipated, but we'll test it once it's cut down to size. Definitely seems hard wearing enough.

I got a little side job - processing images to upload to the new Animation13 photo gallery. Rina (Web and Publications Officer) selected 60 or so from the massive bunch we received from Chris Foster the photographer (and they really are amazing photos!), and it's my job to ensure they have the correct names, and that we have large images, preview images and a thumbnail of the correct dimensions, aspect ratio and dpi suitable for web viewing. And then to upload them to our web server on babbage.cs.man.ac.uk.

Unfortunately I couldn't log into babbage - my CS password wasn't being accepted. After further inspection I noted that I couldn't log into my CS account at all - a quick email to Tim from duty office ensued, and it should be fixed tomorrow.

Friday, 19 July 2013

Day 30: Why the Windows 7 installer is horrendous

The horror that is a Windows installation

Set about reinstalling Katy and Donald with the on-campus Windows 7. Katy did fine: the install went well and all was dandy. Donald, now, was another matter entirely.

First, the installer would get to the bit where you choose which drive to install Windows on, then hang there and tell me a required CD/DVD device driver was missing: insert the installation disk to load the driver.

This issue normally happens when you need to load the AHCI or RAID driver for the installation (it's all to do with SATA drivers and recognising your hard drives and SSDs and such). Now, I tried using the installation disk; I tried all 50 or so of the AHCI and RAID driver options on that disk, and none of them actually fixed the issue and let me install Windows.

However, the second cause of this error is when the Windows setup you are installing from (in my case a live, bootable USB drive) is internally corrupted in some way. In that case, reformatting and remaking the bootable USB drive fixes it.

Remaking the USB drive got me past this issue; then I hit another error when trying to format the SSD to install Windows: apparently Windows cannot create the partition.

So anyway, I've discovered the most foolproof way to get Windows to install when you have drive problems: boot into PartedMagic or some other live partition management software (e.g. GParted directly) and set up your drives manually. You want a fresh, clean disk with only a new MS-DOS-style partition table and no partitions initialized on it. Leave the space you wish to install Windows on unformatted: no initializing an NTFS partition or any of that jazz.

Only took about 10 hours of trial and error with Hoops and now Donald to work that one out!

Promotional Video for the School

Some chaps took some quick video clips of the AR stuff in action as part of a promotional video for the University. Pics below, courtesy of Toby (thanks :))
Dude on the left has the absolute latest in camera stability gadgetry

The bouncing golf balls, result of 4 gruelling hours well spent
And the annual barbecue was roaring away in the background as I sat typing furiously away indoors. Due to fasting commitments I couldn't join the general food splurging, and my afternoon was spent distancing myself far enough from the sumptuous smells wafting through just about everywhere. Having a room without a window has some benefits then!

Back to Automation

We left off with the following tasks done:
  • Replacing regex within .fbx files so they reference the correct textures
  • Building .fbx and .jpg files into working .xnb files via command-line MSBuild
  • Various bits of shell scripting

And these not yet done:
  1. Some more shell scripting (moving the correct files about..?)
  2. How to manage the loading of files into LowryDemo (current method of replacing .xnb and then hitting M, Z is too dangerous and user unfriendly)
  3. How to manage the gif generation (user needs to manually adjust variables on every head to center the head model so that it rotates in the middle of the view: need to adjust zoom, Y position, Y rotation of the model before running the JPEG snapping bit)

Looked into task 2 today, managing loading of files into LowryDemo. After lots of googling, debugging, googling, sitting back in frustration, debugging and googling I've finally worked it out.

Overrode the model loader code to customize it, because I want to load the .xnb files I built myself, not the ones the project builds at compile time. This means specifying a custom directory and model name for loading. And it works: I can load whatever I want, whenever I want, while the program is running! The correct head loading in LowryDemo is complete at long last.

To do now:
  • Work out the automation of ExportJPEGs and implement it into the automated workflow
  • Use batch files in the shell script to do the MSBuild stuff on the fly
  • Finish off batch file automation

And that should be it!

There are also some optimizations I need to do in general that I recently learnt about, mainly adding content links rather than copying content to the content folder. This should save a good gigabyte or so of wasted space in the final program! Very welcome stuff.

Random notes I made whilst working it all out (for reference):

(also links to more great resources on the Content Pipeline itself and then how to extend it)

At compile time, all assets are processed through content pipeline and stored in a temporary file (.xnb)
Content.Load<Model>("blah") retrieves the model you specify from the content pipeline, which is given as a Model object to whatever called Content.Load.

Brief overview of Content Pipeline, 4 steps on all assets before they are used in your game:
  1. Importer: reads imported data so the content pipeline can process it. Outputs the object that the next element can process...
  2. Content Processor: converts raw data from Importer into something meaningful, what we call processed data.
  3. Content Compiler: puts processed data into a (binary) compiled asset file, an .xnb
  4. Content Loader: reads in compiled asset file, turns it into an object your game can use

Summary:

Specify Content.Load to point to your own premade .xnb files.
Name must not include an extension
Must be the name of the original file (sans extension) before compiling. Not so! Simply use the name of the final .xnb file (sans extension).
Corresponding textures must be in the same directory.
CANNOT USE ABSOLUTE FILE PATHS - (this one got me for a looong while, but it does make sense - moving the code to other people's computers, you don't know where stuff will end up in an absolute sense)
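Putting the summary together, the custom loading boils down to pointing a ContentManager at our own directory. A sketch (the directory and "head42" asset name are placeholders, and 'Services' is the Game's service provider):

```csharp
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Called from inside the Game class.
Model LoadPrebuiltHead(string assetName)
{
    // The root directory must be RELATIVE (absolute paths don't work),
    // and must contain the .xnb files our MSBuild step produced, with
    // the texture .xnb files sitting alongside the model .xnb.
    var headContent = new ContentManager(Services, @"Content\heads");

    // Asset name = final .xnb filename, with no extension,
    // e.g. LoadPrebuiltHead("head42") for Content\heads\head42.xnb.
    return headContent.Load<Model>(assetName);
}
```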

Day 29: Return of MSBuild

...and I worked out what the MSBuild problem was. It wasn't recognising the assembly Microsoft.Xna.Framework.Content.Pipeline.dll, apparently because I was using the wrong public key token.
Opened that sample code on WinForms, noticed they were referencing the same assembly within the code, and saw that they were using a different public key token to the one I had specified. Copy-pasted their token and boom! MSBuild now recognises the BuildContent task and the fbx starts building!

So new error:
MSB4044: The "BuildContent" task was not given a value for the required parameter "TargetProfile".
 Fixed by adding the parameter TargetProfile="HiDef" to my proj file.

Next error:
1.fbx : error : Cannot autodetect which importer to use for "t1.jpg". There are no importers which handle this file type. Specify the importer that handles this file type in your project.
This happens after it seemingly starts correctly importing and processing 1.fbx (the fbx file I told it to process). 1.fbx references the texture t1.jpg, as intended, and I've placed the texture in the same directory as the model so it can be found. Now I need to somehow tell MSBuild to use Microsoft.Xna.Framework.Content.Pipeline.TextureImporter.dll, as well as the TextureImporter and TextureProcessor pipelines. How on earth am I going to do this... so this is the current project file:
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 
    <UsingTask TaskName="BuildContent" AssemblyName="Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553" /> 
    <PropertyGroup> 
      <XnaInstall>C:\Program Files (x86)\Microsoft XNA\XNA Game Studio\v4.0\References\Windows\x86</XnaInstall> 
    </PropertyGroup> 
    <ItemGroup> 
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.FBXImporter.dll" /> 
    </ItemGroup> 
    <ItemGroup> 
      <Content Include="1.fbx"> 
        <Importer>FbxImporter</Importer>   
        <Processor>ModelProcessor</Processor> 
      </Content> 
    </ItemGroup> 
    <Target Name="Build"> 
        <BuildContent SourceAssets="@(Content)" PipelineAssemblies="@(PipelineAssembly)" TargetPlatform="Windows" TargetProfile="HiDef" /> 
    </Target> 
  </Project>
So I want to include TextureImporter.dll in the PipelineAssemblies.

Welp, try the obvious (added line 8):
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 
    <UsingTask TaskName="BuildContent" AssemblyName="Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=842cf8be1de50553" /> 
    <PropertyGroup> 
      <XnaInstall>C:\Program Files (x86)\Microsoft XNA\XNA Game Studio\v4.0\References\Windows\x86</XnaInstall> 
    </PropertyGroup> 
    <ItemGroup> 
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.FBXImporter.dll" />
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.TextureImporter.dll" />
    </ItemGroup> 
    <ItemGroup> 
      <Content Include="1.fbx"> 
        <Importer>FbxImporter</Importer>   
        <Processor>ModelProcessor</Processor> 
      </Content>
    </ItemGroup> 
    <Target Name="Build"> 
        <BuildContent SourceAssets="@(Content)" PipelineAssemblies="@(PipelineAssembly)" TargetPlatform="Windows" TargetProfile="HiDef" /> 
    </Target> 
  </Project> 
...
Which is almost exactly what we needed, except that LowryDemo expects the model to be named 1.xnb and not 1_0.xnb. Annoying, but we can simply rename every model that comes out of this MSBuild and it will work straight off. At least the texture comes out correctly named as t1_0.xnb.
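That rename is easy to script. A minimal sketch, assuming the build output lands in a single directory (the helper name is mine):

```shell
#!/bin/bash
# rename_builds DIR: rename MSBuild's N_0.xnb model outputs back to the
# N.xnb names LowryDemo expects. Texture files like t1_0.xnb are left
# alone, since the glob only matches names starting with a digit.
rename_builds() {
  local dir="$1" f base
  for f in "$dir"/[0-9]*_0.xnb; do
    [ -e "$f" ] || continue          # glob matched nothing
    base=$(basename "$f" _0.xnb)     # 1_0.xnb -> 1
    mv "$f" "$dir/$base.xnb"
  done
}

# e.g. rename_builds .   # run in the BuildContent output directory
```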

What we have so far

Right, so here's a quick lowdown:

  1. if our incoming folder's contents changed (i.e. files arrived), get the names of the new files (this is hard for some reason)
  2. (perhaps copy new files to another folder to get processed, so we don't lose the originals)
  3. rename the files to the next incremented integer to ensure we don't miss any heads (1.fbx, 2.fbx, etc.)
  4. rename any new .jpg file: cut off "_tex_0" and prepend a "t", so 1_tex_0.jpg becomes t1.jpg
  5. edit any new .fbx file and apply the regex replacements
  6. build the .fbx into an .xnb
  7. copy/paste into the LowryDemo program folder
  8. load the next head.
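The awkward "get names of new files" step can be sketched with sorted listings and comm; the helper name and paths here are hypothetical, and filenames are assumed not to contain newlines:

```shell
#!/bin/bash
# new_files DIR STATE: print files that have appeared in DIR since the last
# call that used the same STATE snapshot file.
new_files() {
  local dir="$1" state="$2" current
  touch "$state"                   # first run: start from an empty snapshot
  current=$(mktemp)
  ls "$dir" | sort > "$current"
  comm -13 "$state" "$current"     # lines only in the current listing
  mv "$current" "$state"           # remember this listing for next time
}

# e.g. new_files incoming /tmp/seen-list
```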

The renaming can all be done using sed, as well as the regex replacement within fbx files (tested regex replacement sed command, all seems to work perfectly). The build of fbx to xnb has been sorted too now that I've enforced my will upon MSBuild.
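A sketch of the sed side, with a hypothetical helper for the texture rename; the real four-line fbx substitution isn't reproduced in this post, so the fbx pattern shown is only a placeholder:

```shell
#!/bin/bash
# rename_textures DIR: turn 123D's N_tex_0.jpg output names into the tN.jpg
# names the fixed-up fbx will reference.
rename_textures() {
  local dir="$1" f new
  for f in "$dir"/*_tex_0.jpg; do
    [ -e "$f" ] || continue
    new=$(basename "$f" | sed 's/^\(.*\)_tex_0\.jpg$/t\1.jpg/')
    mv "$f" "$dir/$new"
  done
}

# The in-place fbx edit is a plain sed -i; this pattern is a placeholder,
# not the real substitution:
#   sed -i 's/OLD_TEXTURE_LINE/NEW_TEXTURE_LINE/' 1.fbx
```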

One issue now is what should really be the "easy" bit, just the general moving files around. Getting the names of the new model and texture and then transporting them to be built by MSBuild, transporting the .xnb's to the correct folder inside the bowels of LowryDemo's content and stuff like that.

The more important issue is how to deal with loading the compiled xnb files into the AR demo (LowryDemo) program. The current approach of placer heads and copy-pasting xnb's can be automated, but it's a bit fiddly, so I'm looking into content pipeline extensions as well.

Another reinstall

There was an issue with activating the Windows 7 install on Katy, Donald, Walter and Hoops. To fix it we are going to reinstall the PCs with an On Campus version of the 64-bit Windows 7 disk image provided by the University.

So yes, starting with Donald I'm going to be reinstalling absolutely everything... again. Sounds like fun! I'm so good at this now I can probably write a script that presses the correct keys at the correct seconds to completely automate the install of Windows, Visual Studio, Goblin XNA, 3DS Max, Camtasia, etc..

Wednesday, 17 July 2013

Day 28: MSBuild and building .xnb's from the command line!

Building .xnb's from the command line

So the process to make an .xnb out of an .fbx goes something like: generate an MSBuild project that invokes the content pipeline, and then run it using MSBuild.

"MSwhat??" I hear you say? Well this overview of MSBuild explains all: http://msdn.microsoft.com/en-us/library/ms171452(v=vs.90).aspx

Taken from said overview: MSBuild is basically Microsoft's universal building thing, i.e.
The Microsoft Build Engine (MSBuild) is the new build platform for Microsoft and Visual Studio. MSBuild is completely transparent with regards to how it processes and builds software, enabling developers to orchestrate and build products in build lab environments where Visual Studio is not installed
So Visual Studio uses a hosted instance of MSBuild to build its projects. In other words, you can build a VS project from the command line (even on a machine with no VS installation) using MSBuild, and get exactly the same results as if you had built the project from within VS itself.

Perfect for our uses.

The MSBuild project file that I need to generate will be an xml file looking something like the example on this page: http://xboxforums.create.msdn.com/forums/t/77371.aspx. It fully describes what items need to be built as well as how they need to be built with different platforms and configurations.

Key features of the project files:
  • Items: represent inputs into the build system, grouped into item collections based on their user-defined collection names. Use these collections as parameters for tasks, which use the individual items contained in the collection to perform the steps of the build process.
<ItemGroup>
    <Compile Include = "file1.cs"/>
    <Compile Include = "file2.cs"/>
</ItemGroup>
...
@(Compile) <- to reference the item collection later on (item collections use @, properties use $)
  • Properties: represent key/value pairs that can be used to configure builds.
<PropertyGroup>
    <BuildDir>Build</BuildDir>
</PropertyGroup>
...
$(BuildDir) <- to reference the property later on
  • Tasks: reusable units of executable code used by MSBuild projects to perform build operations, e.g. a task might compile input files or run an external tool. Once created, tasks can be shared and reused by different developers in different projects. The example below calls the MakeDir task and passes it a parameter, which is the value of the BuildDir property (a task parameter can be a property or an item collection).
<Target Name="MakeBuildDirectory">
    <MakeDir
        Directories="$(BuildDir)" />
</Target>
  • Targets: group tasks together in a particular order and expose sections of the project file as entry points into the build process. E.g. above, the target MakeBuildDirectory contains the task MakeDir.
To build a project file, use something like:
MSBuild.exe MyProj.proj /property:Configuration=Debug


So I made myself one of them:
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 
    <UsingTask TaskName="BuildContent" AssemblyName="Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d" /> 
    <PropertyGroup> 
      <XnaInstall>C:\Program Files (x86)\Microsoft XNA\XNA Game Studio\v4.0\References\Windows\x86</XnaInstall> 
    </PropertyGroup> 
    <ItemGroup> 
      <PipelineAssembly Include="$(XnaInstall)\Microsoft.Xna.Framework.Content.Pipeline.FBXImporter.dll" /> 
    </ItemGroup> 
    <ItemGroup> 
      <Content Include="*.fbx"> 
        <Importer>FbxImporter</Importer>   
        <Processor>ModelProcessor</Processor> 
      </Content> 
    </ItemGroup> 
    <Target Name="Build"> 
      <BuildContent SourceAssets="@(Content)" PipelineAssemblies="@(PipelineAssembly)" TargetPlatform="Xbox360" /> 
        <BuildContent SourceAssets="@(Content)" PipelineAssemblies="@(PipelineAssembly)"  TargetPlatform="Windows" /> 
    </Target> 
  </Project> 
And I added the msbuild.exe directory to the %PATH% environment variable, opened a cmd prompt, and tried to build my new project file (with an fbx in the same directory). Nothing I try seems to get this thing to build; the error is:
C:\Users\ARdemo\Desktop\ARDemo\MSBuild Projects\process_fbx.proj(16,7): error MSB4062: The "BuildContent" task could not be loaded from the assembly Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d. Could not load file or assembly 'Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.
Frankly, this makes no sense. The Microsoft.Xna.Framework.Content.Pipeline.dll is in the right place, Visual Studio Express builds correctly, I've installed all of the latest service packs and updates for VS and everything else, I'm using the 32-bit and the 64-bit version of msbuild (tried both, neither of them worked), I used dependency walker to check for dll dependencies of Microsoft.Xna.Framework.Content.Pipeline.dll and installed the missing dll's, still no luck. Reinstalled XNA, .NET4.5, installed the DirectX SDK, anything else I could find.

So... the assembly "Microsoft.Xna.Framework.Content.Pipeline, Version=4.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d". Is it parsing that correctly?

Google turns up nothing. Can't believe no one wants to build fbx files outside of Visual Studio.

So it seems there is an issue with the dll or one of its dependencies. MSBuild can't see the Microsoft.Xna.Framework.Content.Pipeline assembly at all. I've even added its directory to the path. I'm using 32-bit MSBuild and .NET, and 32-bit XNA (tried 64-bit MSBuild, but I still get this error). I've told MSBuild to use .NET 4.0; I do have .NET 4.5 installed, so maybe that's a factor?




Possible alternative to all this jazz: http://xbox.create.msdn.com/en-US/education/catalog/sample/winforms_series_2
(A tutorial that shows you how to import arbitrary 3D models at run time by running the Content Pipeline importers and processors dynamically on the fly as opposed to building all the content ahead of time in the usual way)

Will look into it because this MSBuild stuff is getting nowhere.

Tuesday, 16 July 2013

Day 27: lftp mirror, AR materials, unable to load ALVARWrapper.dll

The issue yesterday about kiss rejecting rsync commands? All fixed using lftp mirror as an alternative. Need to install lftp under Cygwin on both the AR and 123D machines. Some quick and friendly documentation on lftp and its awesomeness here.

The get.sh file goes on the AR machine to grab the texture/model files:
Lovely code:
#!/bin/bash
# get.sh
# mirror a remote folder, copying new files to local dir
# 
# pull files from: public_html/123/incoming
# (toby@kiss.cs.man.ac.uk)
# to: the directory this script is run from
#
# Non recursive, resumes partial files in case of interruption
# Uses dir cache for speed
#
# last edited: 16/07/2013 by Hamza Mahmud 
#
lftp -u toby,********* kiss.cs.man.ac.uk << EOF
mirror -cr --use-cache public_html/123/incoming .
quit 0
EOF

And a similar put.sh file goes on the 123D machine. There will be a folder on the 123D machine in which the 123D operator will paste all models and textures. They will initially run a script that will call this put.sh file every minute or so to update the remote dir.

Similarly on the AR machine a script will be running that calls get.sh every minute.

Automation of syncing files across AR and 123D machine is now almost done. If we recall steps 1 to 5 of the final 123D AR workflow:
  1. At the 123D Catch machine, take bunch of photos, run 123D Catch on them and export a model.
  2. run the put.ftp script to upload the model and its texture to our file store.
  3. On the AR machine go to C:\Export\toby, run get.ftp (this script pulls the 123D models from our file store somewhere)
  4. rename texture file to "t1.jpg" so now we have e.g. 1.fbx and t1.jpg
  5. Use regex to replace 4 lines in fbx
I now need to construct a script that will run on the AR machine, that will call get.sh every minute, and whenever a texture and model is grabbed will rename textures and replace the relevant lines in .fbx's. Here we go.

I should really get in touch with a shell scripting expert...

But in a fit of impending excitement I started relearning and revising some basic shell scripting. Made a few notes:
">" redirect stdout, e.g. to a file
ls -l > file

"<" means command before it take input from source after it:
grep searchterm < file

">>" append stdout to a file
date >> file

"2>" redirect stderr to a file

| redirect stdout from previous command into next command
ls -l | grep searchword | sort -r

make variables by "=", no spaces:
msg1=Hello

" " enclosing means treat everything inside as literal except certain keywords (escape char \, $ for vars)
' ' means take as literal, no exceptions.

reference variables with $, can use braces so that you can put characters immediately after it
"$msg1 World!" -> prints Hello World!
"${msg1}oooooo!" -> prints Hellooooooo!

command line arguments:
$# parameter count
$@ all parameters separated by spaces (useful for passing the params to another function or program)
$- the flags the shell was invoked with (useful for controlling program flow based on flags set)
$$ process id of the shell invoked to run the script (useful for creating temporary filenames
unique to this invocation of the script)

command substitution: output of the command substituted in place of the command name
$(command)
`command`

arithmetic expansion (the value of the expression replaces the substitution)
$((expression))

if list
then list
[elif list
then list] ...
[else list]
fi

if conditions are evaluated via the "test" command, which evaluates to true = returns 0 (anything else is false)
test is equivalent to enclosing the expression in square brackets.
(MUST HAVE THE INCLUDED SPACES)
if [ "$1" = "1" ]

Can integrate && and || between tests. Both are LAZY (if the first test of && fails, the second is not evaluated; likewise if the first test of || succeeds).

[ x -gt y ] <- greater than of the test command (before we had "=" of the test command)

while list
do list
done
"The two lists are executed repeatedly while the exit status of the first list is zero."

for variable in word ...
do list
done
"The words are expanded, and then the list is executed repeatedly with the variable set to each word in turn."

OR CAN REPLACE DO AND DONE WITH { AND } - can't do with while loop.
for variable in word
{ list
}


Functions:
name() {
commands
}
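Rolling those notes into one small runnable example (nothing project-specific, just the constructs above):

```shell
#!/bin/bash
double() {                            # a function
  echo $(( $1 * 2 ))                  # arithmetic expansion
}

msg1=Hello                            # variable: no spaces around =
echo "${msg1}oooooo!"                 # braces let characters follow the variable

total=0
for n in 1 2 3; do                    # for loop over a list of words
  total=$(( total + $(double "$n") )) # command substitution inside arithmetic
done

if [ "$total" -gt 10 ]; then          # test command: the spaces inside [ ] matter
  echo "total is $total"              # prints: total is 12
fi
```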
   
And I came up with this simple, highly amateurish framework for the automation code:
#!/bin/bash
echo "Welcome to the AR process script. It's now running, you can relax."
ls . > /tmp/old-list
while :
do
 /bin/bash get.sh # runs the lftp mirror command to grab new files

 ls . > /tmp/current-list
 if ! diff -q /tmp/current-list /tmp/old-list > /dev/null
 then
  # we have new files!
  diff /tmp/current-list /tmp/old-list | grep '^<' | while read x file
  do
     # do something with each new file $file
     echo "$file"
     mv "$file" gotten/
  done
 fi
 cat /tmp/current-list > /tmp/old-list
 sleep 60
done

This prints and processes any new files that appear in the import directory inside the inner while loop. My first draft had the diff test inverted (diff returns 0 when the two listings are identical, so the "new files" branch only ran when nothing had changed) and a stray awk command that swallowed the inner loop's stdin, which is why the commands after its do never appeared to execute.

Even with that sorted, I've decided it isn't worth me devoting more time to the shell scripting until after I've finished the other task of automating content building, since the shell scripting can easily be done by an expert.

AR Materials

The little Animation13 signs we used last Friday were printed on a sturdy, rigid plastic board that Toby noticed would be perfect to use as Marker cards.

I set about researching where we could source such material, and came across sheets of foam PVC whose description and image resembled the A13 signs quite closely: http://www.brettmartin.com/en-gb/plastic-sheets/products/foam-pvc.aspx

Aka Foamalux!

Found quite a lot of suppliers of the stuff, and we've ordered some large sheets we will be guillotining to A4 and A3 size to use as AR markers - coupled with A4 size sticky labels to print the markers on and attach them to the foam PVC boards.

Inexplicable problem with ALVARWrapper.dll

Ran the LowryDemo program today and hit the age old error:
unable to load ALVARWrapper.dll: the specified module cannot be found
SOLUTION: open \src\device\vision\alvardllbridge.cs, change ALVARWrapper.dll to its full file path, then rebuild GoblinXNA
Implemented (what I thought last time was) the solution and rebuilt all, but annoyingly the problem remained.

Ran depends.exe on alvarwrapper.dll, copied ieshims.dll and dcomp.dll to bin/x86 and to dlls/unmanaged; still no luck.

Reinstalled .net 4.5, installed Visual Studio SP1, reinstalled opencv and alvar and rebuilt the alvarwrapper dll (opening the alvarwrapper project in Visual C++ Express and rebuilding from scratch, then copying over) and installed every available update from Windows Update. Restarted PC. Copied all dlls from dlls/unmanaged into bin and bin/x86.

Works. Strange, not certain which step was the one that fixed it in the end.

Monday, 15 July 2013

Day 26: Reboot

Setting up Donald

Arrived at work today and found a lovely surprise waiting for me in the AIG lab: none other than Donald himself! Seems my workplace got a bit of an upgrade. I copied all the AR stuff off Katy (the previous AR workhorse) and onto the new superfast machine in a jiffy, then installed GoblinXNA, following the installation guide to the letter; no problems to speak of.

A recurring issue during the installation, however, is that ALVAR does not set its paths up correctly. I think this is an issue when installing on 64-bit machines (which they should all be nowadays), but it is easily corrected in the project properties in Visual C++ Express: fix the paths under C/C++ -> General -> Include Directories, and likewise under the Linker's directories.

After Goblin XNA 4.1 was fully set up with .dll's from OpenCV, Alvar, Newton etc all copied properly (all part of the installation guide) I simply copy/pasted the entire project directories for our two XNA projects (LowryDemo aka the standard AR demo program, and ExportJPEGs) into their respective new homes (GoblinXNAv4.1\Projects\) and then built them.

LowryDemo builds without a hitch (after copying the relevant models over to GoblinXNAv4.1\data\models) and runs like a dream - much smoother than on old Katy actually. I noticed it in the bouncing golfball demo, which is somewhat graphically demanding with 15 balls each with shadows and moving about at high speed and everything.

ExportJPEGs also builds flawlessly. Both programs take a very long while to build (understandably; they need to rebuild absolutely everything from scratch).

Productivity just underwent a large boost!

Set about doing task number 1: process the backlog of photos that 123D was too slow to get through on animation day and upload those gifs!

My life for the next few weeks (automation)

So the general task I have right now involves making the entire 123D Catch to AR demo process completely automated so that any user can demo this tech via a simple keypress in future. To do this I need to consider the complete workflow of the 123D-to-AR demo, and work out how to automate every step - and of course remove steps, find better ways of doing things, make it faster and more efficient etc. Lots of fun scripting expected!

Here's an overview of how we did it at Animation13. This is what we want to automate and improve (I posted this in a previous blog post somewhere too): 
  1. At the 123D Catch machine, take bunch of photos, run 123D Catch on them and export a model.
  2. run the put.ftp script to upload the model and its texture to our file store.
  3. On the AR machine go to C:\Export\toby, run get.ftp (this script pulls the 123D models from our file store somewhere)
  4. rename texture file to "t1.jpg" so now we have e.g. 1.fbx and t1.jpg
  5. Use regex to replace 4 lines in fbx
  6. place them both in C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\projects\ExportJPEGs\ExportJPEGs\ExportJPEGsContent
  7. Open ExportJPEGs.sln project in VS C# Express (once its opened leave it open for the next model cycle), import 1.fbx, update code to load "1.fbx", run in debug mode.
  8. JPEGS are now in C:\Export, make gif by running the script "./convert.sh 1.gif" in cygwin, this automatically uploads the gif to the webserver where it displays on the rotating heads web gallery, (important that gif names don't clash!)
  9. copy 1.xnb and 1_0.xnb from C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\projects\ExportJPEGs\ExportJPEGs\ExportJPEGs\bin\x86\Debug\Content (these were generated as a byproduct of step 7)
  10. paste the xnb's you just copied, into C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\bin\Content\heads
  11. Hit the M then the Z key on LowryDemo to load the next model
On every cycle, increment the model number (e.g. in steps 4, 7, 8 and 9, replace the 1 with 2, 3 and so on). This is very important, as LowryDemo expects models named with exactly the right number: 1, 2, 3 and so on. If you misname the models you will waste a head spot.

There are some issues with automating this demo, mainly that it can easily go wrong, and a simple restart is not possible. If you mess up, you have to manually recover the heads that have already been done, restart the AR demo application yourself, and do a few more steps. An educated user (at the moment that definition only covers me) can recover the AR demo up to where the problem occurred within about a minute. Trying to automate such a recovery seems like it'll be pretty hard, but we'll tackle that when we get there.

Automate Tasks 1 - 5: Running 123D Catch and getting the model to the AR machine in the correct format

For this part of the workflow some manual intervention of the user will be necessary. We will assume the user demoing the new 123D-AR demo will take pictures of the subject and copy them onto the 123D machine.

Currently there is no way to run 123D Catch from the command line. Would it be possible to tell Windows to walk itself through a full 123D capture up to the exporting of the model? I highly doubt that will work, because 123D is very inconsistent: it crashes at random times, and in any case the user needs to fix up the meshes on the returned scan. For this reason I will also assume that the user will run the 123D program himself. Full instructions will be provided so that anyone with very basic computer knowledge can operate the program.

The compulsory actions of the operator now include: taking pictures, copying them to the 123D machine, running 123D Catch on the pictures, fixing the final mesh, exporting the model correctly named (meaning the first one is 1.fbx, then 2.fbx, and so on), and sticking it in the OUTGOING directory (we'll make one).

Which means I need to automate the following: upload model&texture from OUTGOING to the filestore, download model&texture from the filestore, open fbx and replace some strings using regular expressions.

Methinks this is a job for cygwin.

Notes:

Current put.ftp script (on the 123D machine) gets everything in the current folder and sticks it in the file store (toby@kiss.cs.man.ac.uk:public_html/123/incoming).

On the AR machine, current pull.sh uses scp to grab everything in the filestore, and there is a delete.sh meant to be manually run immediately after a successful pull.sh which deletes everything in the file store (so the next pull.sh doesn't grab anything it's already previously grabbed). This is meant to be run every few minutes by the user of the AR machine.

To automate it:
Found a great resource: the WinSCP program for Windows! It has a "synchronize" command (click for details) that means: "Get an exact copy of a directory transferring new and modified files only. Delete unwanted files." which is perfect for what we want. Set synchronize to upload from the 123D machine, and to download on the AR machine.

An issue: winscp is obviously Windows based and won't work on Mac or Linux. Perhaps write my own script that uses the Unix ftp command and can thus be executed on any system (on Windows via cygwin), then write a wrapper to make it easier for the user to run on Windows (so they don't have to open cygwin and all that)?

On this train of thought I have come across rsync: Unix command that does exactly that.

Can't get it to work on cygwin. Running:
rsync -avzun toby@kiss.cs.man.ac.uk:public_html/123/incoming/ /cygdrive/c/Export/toby/.

gives these errors:
bash: rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: remote command not found (code 127) at /home/lapo/package/rsync-3.0.9-1/src/rsync-3.0.9/io.c(605) [Receiver=3.0.9]
No clue what is wrong. The shell definitely sees rsync and I can run simple rsync commands successfully, but not against a remote server (the code 127 "remote command not found" suggests the rsync binary can't be found on the remote side). It works with other servers on the domain (e.g. my username at lf032), so it looks like kiss is rejecting rsync. Security issue? Researching.

Automating Tasks 6 - 10: Building .xnb files ahead of time

The automation of the compiling of .xnb files. If I can compile them from the command line then I can automate the running of the Visual Studio programs (ExportJPEGs and LowryDemo) without requiring user intervention. Otherwise a user HAS to open the .sln project, manually import a model into the content, and then do stuff with it. This way the importing is no longer necessary. Solving this problem will aid in automating tasks 6, 7, 9 and 10.


Looks like a standard MSBuild run will compile anything that XNA would, so I need to learn how to run MSBuild from the command line. It needs some XML messing about to get it working; researching now.

Friday, 12 July 2013

Day 25: Animation13

It worked

Surprised I am.

Coding

I coded in space for 70 heads; 12 were taken up by heads of our people, leaving 58 for kids and others. Amazingly it actually worked out: the animated gifs came up fine (view the masterpiece here) and the heads popped up in the AR software as it ran.

The actual photographing for the 123D heads was proceeding at a cheetah's pace thanks to Xin, but the 123D processing itself turned out to be the weak link in the workflow. At one point there were 50 complete photo sets, but 123D had only processed 20 of them, and anyone captured after that point would never actually see their head on the day. If we had tested the workflow at full speed beforehand with an eye for efficiency, we would have realised this and perhaps set up multiple computers uploading demos to the 123D cloud, but alas there simply wasn't the time or inclination - it was the last thing on our minds! We were far more concerned with getting the system to actually function. I still can't believe it even worked; I distinctly remember the day I emailed Toby mentioning "oh, wouldn't it be nice if we could incorporate those 123D captures of heads into the AR so the kids can play about with their friends' heads...?". In all honesty I never for a moment thought such an idea could actually happen; it was a purely "wouldn't that be nice" moment. The amount of updating, automation, coding and improvement the AR software would have to go through to get anywhere near a stage where something like that was feasible was just unthinkable. But we did it; in the end it all just fell into place.

If today has proved anything, it's proved that you can do anything you want to, if you just put your mind to it. Positive thinking and all that gubbins.

Animation13

Animation13, what an event! There were all sorts of crazy things going on. I heard only the highest praises for Gavin's presentation talk thingamajig in the main theatre. Our Raspberry Pi devs turned up at the last minute with some hilarious little toys, the amazing iCub robot came down from Plymouth University (they had immense computers with triple graphics cards and such), the University's Flight Simulator was up in action, you could view the earth in 3D with 3D Google Earth, a 10-year-old showed off the banana piano he had developed, and then you had 123D catch, Robot wars, Raspberry Ripple (people bopping lights dotted about a giant raspberry as they light up), Kinect Music Visualizer, Tom's interactive table, 3D printer, Autodesk demoing astounding graphical jiggery and so on and so forth.

It was quite enlightening to watch the work we had been slaving over for the past few weeks finally come to fruition, with kids having a great time messing about doing various things with people's heads, and cries of "wow", "amazing" and "that's so cool" reverberating throughout the theatre.

Everything was so full of life right till the end - after it was over the removals team took absolutely everything apart in 45 minutes flat, loaded up and ready to return to Kilburn and the like. I am fairly certain everyone there today will eventually trundle into their homes tonight and just fall flat onto their couches/beds/kitchen sinks in a state of pleasurable exhaustion.



Another episode in the Saga of the Augmented Reality demo draws to a close. Join us next week, as we move on in earnest, to bigger and better things!

Thursday, 11 July 2013

Day 24: final prep for Animation13

Set up in Martin Harris, everything is good to go.

Ftp transfer and the 123D area

Fixed the sftp issue: it did work over wifi, but we realised we could automate ssh logins and just use scp to move the files about, so we did that. This was necessary because sftp was messing about with file permissions when the gifs were pushed to the live web folder to be displayed on the rotating head gallery. All works now that we use scp and changed Toby's umask (I think 002 was what worked).

Set up the photography area; it really is fantastic, with some lovely models coming out of 123D Catch now that we have Xin on the photography job with his professional photography skills - skills so professional they are rivalled only by Toby himself!

All about heads

Sought out and fixed a bunch of bugs, automated the adding of heads so that you just:

  1. change one variable (e.g. numHeads = 50)
  2. import placer files
  3. update the bit of code mapping number keys on the keyboard to new model sets, but only if you need them - one model set holds 10 new models and is mapped to one number key
... and the code does the rest automatically. Currently the keyboard keys 1 and 2 are reserved for the inbuilt model sets (i.e. everything except for heads), and 3/4/5 will be used for three sets of 10 heads each. 30 placer head files are in place and everything is ready to go. I'll probably update it to handle 50 tomorrow (using number keys 6 and 7).
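The mapping from a head's index to its model set and number key is simple enough to sketch. This is a hypothetical helper (the real demo is XNA/C#), assuming what's stated above: keys 1 and 2 stay reserved for the built-in model sets, heads start at key 3, and each set holds 10 heads.

```python
HEADS_PER_SET = 10
FIRST_HEAD_SET = 3  # number keys 1 and 2 are the built-in model sets

def set_key_for_head(head_index):
    """Which number key's model set a given head (0-based index) lands in.
    Heads fill set 3 first, then 4, then 5 (and 6/7 once extended to 50)."""
    return FIRST_HEAD_SET + head_index // HEADS_PER_SET

def slot_within_set(head_index):
    """Position of the head inside its set, i.e. which marker card it sits on."""
    return head_index % HEADS_PER_SET
```

So head 0 goes to key 3, head 10 starts key 4, and head 29 is the last slot of key 5; extending to 50 heads just raises the cap so sets 6 and 7 come into play.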

How to add a head

To do the actual addition of a head while the AR Demo is running, you need to paste correctly configured .xnb files (named 1.xnb and t1_0.xnb, or replace 1 with whatever number you're on) into the GoblinXNAv4.1/bin/Content/heads folder (I think that's the one), overwriting the existing files in there. Then you simply click inside the AR Demo window so that it has focus, and press M followed by Z. Pressing M enables a new model to be loaded, pressing Z loads the next model into the next head space in order.

When you hit an M-Z sequence, heads are added to the number 3 model set first; it can hold 10 heads (one for each of the 10 marker cards - bear in mind the heads on the two bigger marker cards will be humongous!) before the next head is automatically added to the number 4 model set. Then the number 5 model set fills up and that's it - once it's full the addition of heads is permanently disabled (of course, if I've extended it to take 50 heads then the number 6 and 7 model sets will also be available before it fills).

Bit more detail on adding model sets

Although we will likely only need the 30 or 50 heads programmed as it is, if we did want to add more, here's a little more info. For every extra head you want to import, you need a correctly configured placer pair: a texture and an fbx, with the fbx pointing to that specific texture (the texture has a unique name and sits in the same directory). The way I make a correctly configured fbx is by copying the same original placer fbx repeatedly (you can copy any fbx, but it has to point to a texture, and only one texture), opening the copy in Notepad++ (it's all in ASCII) and editing the texture filepaths to point to ./t1 or ./t2 or whatever number this fbx is. So the final placer files are named 1.fbx and t1.jpg (replace 1 with whatever number you're on).

When XNA generates .xnb's out of models and textures, the .xnb files store inside them exactly where all the corresponding textures are (i.e. they point to specific .xnb texture files) so you must ensure each of the placer fbx models points to a unique texture file with no overlap of any sort (that's why copy-pasting an identical placer model file is not enough, each one must have a uniquely named texture and point to that texture).



So. It begins.

Wednesday, 10 July 2013

Day 23: 123D workflow tested in full

Animation13 Prep

As mentioned yesterday we will have a pretty amazing setup of two demonstrations going on in two separate rooms, 123D Catch with Hoops in one room and Augmented Reality running on Katy in another.

Today I updated the AR program to allow the addition of 10 models whilst running, and we tested the entire workflow with quite positive results. It works, put simply.

The 123D Catch of Anna next to three spotlights was a pretty good success, definitely our best attempt yet. Tomorrow we will be moving to the Martin Harris building and testing everything out in finality.

We are currently using cygwin ftp commands in order to move models generated by 123D Catch on Hoops to Katy, and to move gifs generated on Katy to a special folder on some webserver where they can be displayed on a webpage in real time.

Unfortunately the cygwin ftp commands are not working over wifi, and we have no idea why. Getting ethernet connections in the Martin Harris building may not prove to be easy, so we need to fix this. Toby suggested using sftp and just automating the authentication, so I'll test that out tomorrow. If that doesn't work we shall resort to a spot of manual labour - a USB stick gets models from Hoops and they are run via humans to Katy, where they are copied off and processed. Not ideal but if we have no other choice then it will have to do.

Augmented Reality code improvements

I spent the afternoon cleaning up the horrible hacked-together AR demo program. Removed the copy-pasted code and used a bunch of arrays and for loops to automate different parts of the process. It is now almost at the stage where it can accept any number of heads to add to the program as it runs.

One problem is that there is no way to validate the loading of a head, to ensure the loading occurs only after I've replaced the placer files with a real head. If the load button is pressed, that's it. If the placer file for that model was not replaced then we lose the chance and that potential head can never be used until the program restarts. I used my head for the placer files, so in any case a lost head means my head will show up in its stead.

So how can we ensure the load button is not pressed before we're ready? To assist this I have set the program to require the user to press M to enable the next model to be loaded, then Z to actually load it. The keys Z and M are arbitrary, chosen because they sit at opposite ends of the keyboard. This dual-key requirement will hopefully help to avoid mistakes on my part (I'll be operating the keyboard).
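The dual-key scheme amounts to a tiny state machine. Here's a hypothetical Python sketch of just the logic (the real demo is XNA/C#, and the class name and capacity default are my own for illustration - 30 matching the three head model sets of 10):

```python
class HeadLoader:
    """Two-key confirmation for loading the next head: M arms the loader,
    Z performs the load. Once every head slot is used, loading is
    permanently disabled until the program restarts."""

    def __init__(self, capacity=30):
        self.capacity = capacity  # total head slots across the model sets
        self.loaded = 0
        self.armed = False

    def press(self, key):
        """Feed one key press; returns True only when a head is actually loaded."""
        if self.loaded >= self.capacity:
            return False  # all sets full: no more heads this run
        if key == "M":
            self.armed = True  # enable the next load
            return False
        if key == "Z" and self.armed:
            self.armed = False
            self.loaded += 1  # load the next model into the next head space
            return True
        return False  # Z without a preceding M does nothing
```

A stray Z press does nothing until an M has armed the loader, which is exactly the mistake-proofing the two-key requirement is for.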

Tomorrow, further prep for Animation13! On to Martin Harris we go

Tuesday, 9 July 2013

Day 22: Full 123D Catch workflow preparation stage 1

Hoops

... is the name of the new Lenovo laptop we'd been having trouble with!

Hoops was still causing problems: we couldn't install Ubuntu because the installer couldn't see the Windows 7 NTFS partition on the disk at all - and installing anyway would mean it carving up the disk to its own desires, likely wiping the Windows partition whether we liked it or not.

Nothing we did fixed the issue. We wiped the entire disk and made a new Ubuntu install, but now Win 7 refused to install, telling us it could not create a system partition or find an existing system partition.

We deleted the entire disk, made a new Windows-friendly NTFS partition and tried again, but to no avail: Win 7 just flat out did not want to install on this disk no matter what kind of wiping we subjected it to. We made a new partition tree, we made a FAT32 partition, we did all sorts of gimmickry. We eventually decided to reactivate UEFI mode to see if we could get Win 7 to install with UEFI in the mix. No luck - it got stuck at the starting setup screen as it had before. At long last we succumbed and decided to install Windows 8 from UEFI and try to get a UEFI Ubuntu install to go with it, forgoing Win 7 entirely.

Yet out of nowhere, as we were resignedly copying the Windows 8 installer to our bootable USB drive, a rogue Google link led us to a forum post wherein someone had used fixparts to fix their partitions after encountering problems that greatly resembled our own. With nothing to lose, we booted from the Ubuntu LiveUSB and did just that: deleted the disk and started running every fixparts command we could find!

Well, it worked: Win 7 installed successfully. We tried the non-UEFI version of Windows 8 because we felt like going triple boot, but it didn't work - a similar error to the one Win 7 had thrown many times before (unable to install to the selected partition). Instead of fighting with it we switched to the Ubuntu installer and installed that. At last everything works fine, with the two most important OS's installed correctly.

123D Catch Workflow

Tested a little hack today which allows us to add models to the AR demo program as it is running, and have those models incorporated into the AR program in real time and available to interact with!

This will allow us to add 123D generated heads (generated in one room) of the children, to the AR program (running in another room) in real time, so that their friends at the AR demo can perform dastardly acts to them. Importantly, we won't have to restart the AR program or recompile it when we wish to add models.

The method of doing this is quite simply a hack, and far from ideal if this were a robust, expandable piece of software. Thankfully the only aim right now is to get this working in time for Animation13 tomorrow, so the "how" of it is overlooked. Which is great, because the code I'm going to write for this is despicable by any standards of software engineering!

I'm going to expand the current AR demo not by programming a complex and brilliant for loop that can account for any number of models at once, creating all the required machinations by simply incrementing a variable, but rather by copying and pasting a bunch of code and rewriting the same thing 30 times with slight differences, so that we have space for 30 or so new head models.

Now the hacky bit: the AR Demo will be fed a placer model in place of each of these models (because the actual head models we want don't exist yet). XNA compiles all models to .xnb format when the program is built, and these .xnb files are what the program uses when it runs. So a bunch of placeholder .xnb files corresponding to our placers will be made when we build the AR demo, but importantly I will instruct the AR demo not to load these models until I press a button.

...which I won't do until we get the new models from 123D Catch, and compile them to .xnb, and replace the placer .xnb files with these correct .xnb heads! AR demo has no clue that the xnb's have been switched and will load the new xnb's instead when I press the magic button, resulting in a fresh new model straight from 123D Catch appearing in the AR demo program as it runs, without interruption.

There is quite a lot of preparation after a 123D model is made. A model comes as a model file and a texture file, and unfortunately the model file needs to be edited so that the relative file path of the texture points somewhere that exists (123D spits out models with the most randomly ridiculous relative paths to textures, you wouldn't belieeeeve it). This is done using the following regular expressions:

Find:
Filename: "[.\\a-zA-Z ()0-9~_"]*

Replace with:
Filename: ".\\2_tex_0.jpg"

(replace 2 with whatever number we are up to)

Should perform four replacements every time
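The same substitution can be scripted rather than done by hand in Notepad++. Here's a sketch using Python's re module with the find pattern above verbatim; `fix_texture_paths` is a hypothetical helper name, and it assumes (as Notepad++ does) that the replacement's `\\` means one literal backslash in the output.

```python
import re

# The Notepad++ find pattern, verbatim: matches the Filename line plus
# whatever ridiculous relative path 123D put inside the quotes.
FIND = r'Filename: "[.\\a-zA-Z ()0-9~_"]*'

def fix_texture_paths(fbx_text, n):
    """Point every texture Filename in the (ASCII) fbx at .\\<n>_tex_0.jpg.
    Should perform four replacements per file, as noted above."""
    # In a re.sub replacement string, '\\\\' emits a single backslash,
    # matching what "\\" means in the Notepad++ replacement field.
    replacement = 'Filename: ".\\\\%d_tex_0.jpg"' % n
    return re.sub(FIND, replacement, fbx_text)
```

Running it over the whole fbx text replaces all four Filename lines in one go, which removes one manual (and easily fumbled) step per head.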

The workflow after getting an fbx from 123D Catch is something like this:

  1. get fbx and its texture
  2. rename to 1.fbx and 1.jpg
  3. Use regex to fix the expected filepath of the texture, replaces 4 lines.
  4. place them in C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\projects\ExportJPEGs\ExportJPEGs\ExportJPEGsContent
  5. Open ExportJPEGs in Visual Express, import 1.fbx, edit the code so that it loads "1.fbx", run in debug mode (this makes 36 jpg's of a head and sticks them in C:\Export).
  6. convert jpg's to animated GIF in C:\Export, upload it to the webpage etc etc..
  7. take 1.xnb and 1_0.xnb from C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\projects\ExportJPEGs\ExportJPEGs\ExportJPEGs\bin\x86\Debug\Content and replace the xnb's in C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\bin\Content
  8. hit the key on LowryDemo to load that corresponding model


(then increment the 1 to 2 and repeat)
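As a sanity check on the file shuffling, here is a hypothetical Python helper that spells out the copy operations for a given head number, using the paths from the steps above. It's a planning sketch only - it builds the (source, destination) pairs and doesn't touch the disk.

```python
from pathlib import PureWindowsPath

# Paths copied from the workflow steps above.
CONTENT_IN = PureWindowsPath(
    r"C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\projects\ExportJPEGs\ExportJPEGs\ExportJPEGsContent")
XNB_OUT = PureWindowsPath(
    r"C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\projects\ExportJPEGs\ExportJPEGs\ExportJPEGs\bin\x86\Debug\Content")
AR_CONTENT = PureWindowsPath(
    r"C:\Users\ARdemo\Desktop\ARdemo\GoblinXNAv4.1\bin\Content")

def copy_plan(n):
    """The file moves for head n (steps 2, 4 and 7) as (src, dst) pairs.
    The fbx/jpg are the renamed 123D outputs; the xnb pair is what XNA
    emits after the debug run of ExportJPEGs."""
    return [
        (f"{n}.fbx", CONTENT_IN / f"{n}.fbx"),                # step 4: fbx in
        (f"{n}.jpg", CONTENT_IN / f"{n}.jpg"),                # step 4: texture in
        (XNB_OUT / f"{n}.xnb", AR_CONTENT / f"{n}.xnb"),      # step 7: model xnb
        (XNB_OUT / f"{n}_0.xnb", AR_CONTENT / f"{n}_0.xnb"),  # step 7: texture xnb
    ]
```

Incrementing n and calling this again gives the plan for the next head, mirroring the "increment the 1 to 2 and repeat" note above.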

Testing of the entire workflow shall commence tomorrow at 2pm sharp.

Monday, 8 July 2013

Day 21: Lenovo and UEFI

The Story

The Lenovo arrives with Win8 installed on it. We need Ubuntu and perhaps Windows 7 in order to make use of this lovely laptop for AR and for the new OpenGL stuff for next year's third-year (or was it second-year?) graphics course. So Ubuntu had been installed on top of Win8, and was working OK for a short while (a few hours?) before it stopped booting (i.e. Ubuntu wouldn't boot any more).

The Lenovo was failing in Ubuntu because... actually we don't know why that was.

But at the same time it wasn't starting Windows 8, because the laptop had been set to Legacy mode (deactivating UEFI) in order to install Ubuntu. This could be fixed by re-enabling UEFI in the BIOS, which we did, and got Windows 8 back.

We had:
working Windows 8 but a completely dead Ubuntu in UEFI mode; and
a completely dead Windows 8 with a broken Ubuntu (recovery mode seemed to work) in Legacy mode.

Installing other stuff (Win7 and Ubuntu) in Legacy mode meant the original Windows 8 installation would be unusable unless we switched to UEFI (and switch back when we want to use Ubuntu or Windows 7). Repeated switching between UEFI and Legacy could be a cause of problems so is best avoided.

The solution: install Windows 7 and Ubuntu alongside the Win 8 installation, all in UEFI mode so that we never need legacy.

This didn't work because the Win 8 installation had monopolized the entire 1TB hard drive with a GPT partition table (a consequence of UEFI), and Win 7 can't be installed on a disk in that style. Making new partitions and trying to format them for Win 7 proved useless. The solution is to use a blank drive that hasn't been set up as GPT and install Win 7 there, or to wipe this entire drive (i.e. not just a partition of it) back to a non-GPT NTFS layout that Win 7 supports.

We chose to forget UEFI and its GPT peculiarities and problems (this was taking far too much time), switch entirely to Legacy mode and install the three OS's one by one ourselves. Then we decided we didn't need Win 8 at all, so stuck to installing Win 7 and then Ubuntu. The reason we didn't do this immediately is that a full Legacy Win 7 install would mean formatting the entire hard drive, so all of the inbuilt recovery, backup and hidden Lenovo partitions would be lost and unrecoverable.

Win 7 installed on the fully formatted hard drive perfectly. We had to boot with a bootable GParted USB (PartedMagic didn't work) in order to format the drive beforehand, but all worked out in the end.

Unfortunately when the Ubuntu install was attempted, apparently it could not see the NTFS Win 7 partitions on the disk (strange..) and saw it all as free space, so we still don't have Ubuntu up and running. This blasphemy shall be sorted out first thing on the 'morrow.

Katy

Upgraded Katy with the new 2TB hard drive and copied the system over to the new disk (as it's much faster), also installing the new RAM DIMMs that came from Walter. They didn't work alongside Donald's RAM (the proprietary Dell motherboard didn't like the mix), so I just used Walter's higher-performance sticks alone.

Will get back to 123D Catch and AR work straight after the Lenovo is sorted out provided it doesn't take too long.

In the meantime two 123D trials with Anna were completed, indoors and in the courtyard. Stay tuned for more juicy details..