• Mixer!

    Sep 7, 2011

A few weeks ago I was working on the beta signup page for Mixer. It was released a little while ago, but in case you hadn't heard of it yet, I thought you'd like to know what I've been up to!

Utilising CSS3 animations, HTML5 and AJAX, I was able to recreate the design made by fellow digital magicians at Conjure.

    I particularly like the orange dude/mascot!
    So what is Mixer?
Mixer is a new and exciting way to make the most of the people around you. A list of 150 people in your area is included in your Mix, which gets updated as more people nearby join. You can keep people in your Mix, or swap them out for others! Who you make the most of is up to you!

    Still here?
    Why not sign up to the beta today at http://getmixer.com and get Mixing later this year!

  • Internship at Conjure

    Aug 14, 2011

I'm very pleased to announce joining the Conjure team in their shiny new offices in the Enterprise Centre at the University of Reading for a two-month voluntary internship as a Software Developer.

For those who don't know, Conjure work their magic bringing innovative applications to the mobile market, with a wide-ranging portfolio for a variety of clients including Channel 4 and Wired.

The first day we set about putting the office together from Ikea's flatpack furniture. We were quickly invaded by two Nerf-gun-wielding people from MediaSift; Daniel was expecting us to give him a welcome cake, which didn't seem quite right...

My first front-end task was making some adjustments to the Conjure homepage. Next up was creating a microsite for their London Taxi Meter app, which can be used to estimate the cost of a taxi trip in London and is now available on iPhone, Windows Phone 7 and their newest addition, Android. After seeing the great design idea I was initially unsure that I'd be able to recreate it.

To ensure faster page loading and lower bandwidth use on the hosting server, I aimed to use as few images as I could. With a few tricks and heavy use of CSS3, I was able to recreate the gradient-filled 'bulging' backgrounds in certain areas, carefully matching the colours to the initial design.
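The post doesn't include the actual stylesheet, but a minimal sketch of the kind of image-free 'bulging' background this describes, using a CSS3 radial gradient with the vendor prefixes of the era, might look like this (the class name and colours are placeholders, not the ones from Conjure's design):

```css
/* Hypothetical sketch: a 'bulging' background using a radial
   gradient instead of a sliced image. Colours are made up. */
.bulge {
    background: #e06010; /* flat fallback for older browsers */
    background: -moz-radial-gradient(50% 0, circle, #f08030, #e06010);
    background: -webkit-radial-gradient(50% 0, circle, #f08030, #e06010);
    background: radial-gradient(circle at 50% 0, #f08030, #e06010);
}
```

Because the gradient is generated by the browser, no background image is downloaded at all, which is where the page-load and bandwidth savings come from.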

    I'm happy with the overall results and it gives me a good sense of accomplishment to see it up and running!

  • Summer work 2

    Jul 29, 2011

    After the initial interface was created to facilitate the back-end development of the site I set about creating a basic database with two example articles, with which to populate the page.

This was relatively quick to do, as it was similar to work I had previously done on my own website and on projects during my University course. I then added some randomisation so that a random article is retrieved from the database. I haven't tested this with more than two articles, but in theory it should scale easily.
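The post doesn't show the query itself, but in the PHP/MySQL stack described below, a minimal sketch of random retrieval might look like the following (the table and column names, and the connection details, are all assumptions for illustration):

```php
<?php
// Hypothetical sketch: fetch one random article to show a participant.
// 'articles', 'id', 'title' and 'body' are made-up names.
$db = new mysqli('localhost', 'user', 'password', 'research');

// ORDER BY RAND() is perfectly adequate for a small table like this
// one; it would need rethinking for a table with many rows, since it
// shuffles the whole table before picking the first row.
$result = $db->query(
    'SELECT id, title, body FROM articles ORDER BY RAND() LIMIT 1'
);
$article = $result->fetch_assoc();
?>
```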

I then tested that random articles were correctly retrieved from the database, and that the selections made by the participant were updated correctly in the database. I quickly realised that, due to the nature of the project, duplicate keywords may be generated, and the current implementation would have created a checkbox for each duplicate. So I set about ensuring not only that the checkboxes remained alphabetically ordered, but also that their original order was maintained server side to correctly update the database, while making sure this order was not exposed to the participant, whether hidden on the web page or by other means.

This proved to be the most complex task, as it involved both the business logic and the display logic. The two were combined because of the site's small size; implementing an MVC framework didn't seem worthwhile for something with so few views.

Until this was completed I was left with some blank checkboxes. The database entries would still remain correct, but it would have been obvious that there was a duplicate entry, which could also have been confusing.
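One way to sketch the approach described above in PHP: sort a copy of the keywords alphabetically for display while keeping their original positions (which encode each keyword's origin) server side in the session, so nothing about the ordering leaks into the page. All of the names and keywords here are invented for illustration:

```php
<?php
// Hypothetical sketch of the duplicate-keyword handling.
session_start();

// Original server-side order; may contain duplicates.
$keywords = array('parser', 'corpus', 'theme', 'corpus');

// Keep the original order server side only, never in the page.
$_SESSION['keyword_order'] = $keywords;

// asort() sorts values alphabetically while preserving the original
// array keys, so duplicates stay distinct and each checkbox still
// maps back to an unambiguous position when the form is submitted.
$display = $keywords;
asort($display);

foreach ($display as $originalIndex => $word) {
    // The form exposes only an opaque index, not the keyword's origin.
    echo '<label><input type="checkbox" name="selected[]" value="'
        . $originalIndex . '"> ' . htmlspecialchars($word) . "</label>\n";
}
?>
```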

The next issue was another visual one: the checkboxes, although tabulated, would be 'smushed' around the page and make it look odd. With a bit of CSS tweaking and the removal of an override (it seems I had made a change and then undone it within another style), it was looking better.

Next up was to test it with a real article's output and determine whether the database was updated correctly by Richard's analysis program. After a few tweaks the program's output was correct and we could begin the batch run of the many articles, which would continue to produce a very large SQL file over the next 40 or so hours!

I'll post a link once the site is out there, which will hopefully be soon.

  • Summer work

    Jul 25, 2011

    I have been fortunate enough to have been asked to work at the University of Reading to assist a student with research on their PhD throughout July in order to:
    …helping to drive forward part of an exciting research project at the cutting edge of natural language processing and interacting with linguistics.  The project is setting out to capture data via the web from human participants to validate themes produced by existing automated processes.
In other words, it uses the web and social media as a promotion platform to present participants with a short article to read, followed by keywords or phrases to be selected based on their perceived relevance.

The requirements of the task were detailed as web design, with programming being non-essential (I quickly found it to be essential!).

The technologies I decided upon, for ease of getting development underway, were PHP and MySQL. These are both technologies I have dabbled with before, and they have the added bonus of running on my local machine without hosting, for a faster development and test cycle.

I referred back to my personal website for some refreshers on CSS and database queries in PHP, which has at last proven not to be a waste of my time.

First of all, I created two templates for keyword selection. The first used groups of keywords, making it clear that there were specific groups to select from; the origin of these groups (the PhD student's algorithm, another keyword algorithm, or possibly chance) was left undetermined. The second presented an individual checkbox for each keyword or phrase. This was the chosen presentation format, as it keeps each keyword's origin anonymous and so prevents the malicious or false data that might otherwise have been collected.

I next set about writing some JavaScript to ensure that submitting repeated null entries at least required more interaction with the web page than clicking the submit button; the extra requirement of a selected radio button helped achieve this. There was discussion of using a CAPTCHA instead. However, given the frustration and time these can require to fill in, and the ideal aim of having participants contribute data for multiple articles, it was decided that they would not be used.
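The original script isn't shown in the post, but the check it describes can be sketched as a small pure function plus some form wiring (the element names and messages are invented for illustration):

```javascript
// Hypothetical sketch: the form only submits if at least one of the
// relevance radio buttons has been selected.
function hasRelevanceSelection(radios) {
    // `radios` is any array-like collection of radio inputs.
    for (var i = 0; i < radios.length; i++) {
        if (radios[i].checked) {
            return true;
        }
    }
    return false;
}

// Wiring it to the form (browser-only; names are made up):
// document.getElementById('keywordForm').onsubmit = function () {
//     var radios = document.getElementsByName('relevance');
//     if (!hasRelevanceSelection(radios)) {
//         alert('Please rate the article before submitting.');
//         return false; // block the empty submission
//     }
//     return true;
// };
```

This doesn't stop a determined spammer, of course; it just raises the effort needed above a single reflexive click, which was the stated aim.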

To further deal with the possibility of such results, it was decided that each database entry would track certain information so that results could later be pruned if deemed necessary.

    Unfortunately at this moment I do not have a publicly available working version of the site to link.