Internship at Conjure

I'm very pleased to announce joining the Conjure team in their shiny new offices in the Enterprise Centre at the University of Reading for a two-month voluntary internship as a Software Developer.
For those who don't know, Conjure work their magic in bringing innovative applications to the mobile market, with a wide-ranging portfolio for a variety of clients including Channel 4 and Wired.
On the first day we set about putting the office together from Ikea flat-pack furniture. We were quickly invaded by two people from MediaSift armed with Nerf guns; Daniel was expecting us to give him a welcome cake, which didn't seem quite right...
My first front-end task was making some adjustments to the Conjure homepage. Next up was creating a microsite for their London Taxi Meter app, which can be used to estimate the cost of a taxi trip in London. It is now available on iPhone, Windows Phone 7 and, their newest addition, Android. After seeing the great design idea I was initially unsure that I'd be able to recreate it.
With the utilities available to me, I aimed to use as few images as possible, to ensure faster page loading and lower throughput on the hosting server. With a few tricks and heavy use of CSS3, I was able to recreate the gradient-filled 'bulging' backgrounds in certain areas, carefully matching the colours to the initial design.
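Effects like this can often be built without images at all. Here is a minimal sketch of a gradient-filled 'bulging' background in CSS3; the selector and colour values are illustrative, not those of the actual site, and at the time vendor-prefixed forms such as -moz- and -webkit- were also needed:

```css
/* Illustrative only: a 'bulging' background built from a radial
   gradient instead of a background image. The selector and colours
   are placeholders, not the real site's palette. */
.bulge {
    background: #2a2a2a; /* solid-colour fallback for older browsers */
    background: radial-gradient(ellipse at center top,
                                #4a4a4a 0%,
                                #2a2a2a 70%);
}
```

Swapping images for gradients like this saves an HTTP request per background and keeps the page weight down.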
I'm happy with the overall results and it gives me a good sense of accomplishment to see it up and running!
Summer work 2

After the initial interface was created to facilitate the back-end development of the site, I set about creating a basic database with two example articles with which to populate the page.
This was relatively quick to do, as it was similar to work I had previously done on my own website and on projects during my University course. I then added some randomisation so that a random article is retrieved from the database. I haven't tested this with more than two articles, but in theory it should scale easily.
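A common way to do this in MySQL is to let the database pick the row; the table and column names below are illustrative, not the project's actual schema:

```sql
-- Illustrative sketch: fetch one article at random.
-- ORDER BY RAND() sorts rows by a per-row random value, so it is fine
-- for a small table but would not scale well to a very large one.
SELECT id, title, body
FROM articles
ORDER BY RAND()
LIMIT 1;
```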
I then tested that it would correctly retrieve random articles from the database, and that the selections made by the participant were updated correctly. I quickly realised that, due to the nature of the project, duplicate keywords may be generated, and the implementation at that point would have created a checkbox for each duplicate. So I set about ensuring not only that the checkboxes remained alphabetically ordered, but also that their original order was maintained server side so the database could be updated correctly, while keeping that order hidden from the participant, whether on the web page or by other means.
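The ordering problem can be sketched as follows: map each unique keyword to every original position it occupied, display the unique keywords alphabetically, and expand the participant's ticks back to the original positions server side. This is a minimal sketch in PHP; the function, variable and keyword names are illustrative, not taken from the actual site:

```php
<?php
// Illustrative sketch of the de-duplication step. Keywords may repeat;
// the participant sees each keyword once (alphabetically), while the
// server keeps the original positions so the database row can still be
// updated correctly.

// Map each unique keyword to the list of original positions it
// occupied, and return the unique keywords in alphabetical order.
function prepareKeywords(array $keywords): array {
    $positions = [];
    foreach ($keywords as $index => $word) {
        $positions[$word][] = $index;   // remember every original slot
    }
    $unique = array_keys($positions);
    sort($unique);                      // alphabetical display order
    return [$unique, $positions];
}

// When the form comes back, expand each ticked keyword to all of the
// original positions it covered.
function selectedPositions(array $ticked, array $positions): array {
    $selected = [];
    foreach ($ticked as $word) {
        foreach ($positions[$word] as $index) {
            $selected[] = $index;
        }
    }
    sort($selected);
    return $selected;
}

// Example: 'tax' appears twice, at positions 0 and 2.
[$display, $positions] = prepareKeywords(['tax', 'economy', 'tax', 'budget']);
// $display is ['budget', 'economy', 'tax']; ticking 'tax' maps back to
// original positions 0 and 2 via selectedPositions(['tax'], $positions).
```

Because the position map never reaches the browser, the alphabetical display order gives the participant no hint of where duplicates sat in the original list.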
This proved to be the most complex task, as it involved both the business logic and the display logic, which were kept together because of the site's small size; it didn't seem worth implementing an MVC framework for something with so few views.
Until this was completed I was left with some blank checkboxes. The database entry would still have been correct, but it would have been obvious that there was a duplicate entry, which could also have been confusing.
The next issue was another visual one: the checkboxes, although tabulated, would be 'smushed' around the page, making it look odd. With a bit of CSS tweaking and the removal of an override (it seems I had made a change and then undone it in another style), it was looking better.
Next up was to test it with a real article's output and determine whether the database updated correctly with Richard's analysis program. After a few tweaks the program's output was correct, and we could begin the batch run over the many articles, which would go on to produce a very large SQL file over the next 40 or so hours!
I'll be posting a link once the site is out there, which will hopefully be soon.
Summer work

I have been fortunate enough to be asked to work at the University of Reading throughout July, assisting a student with their PhD research, in order to:
…helping to drive forward part of an exciting research project at the cutting edge of natural language processing and interacting with linguistics. The project is setting out to capture data via the web from human participants to validate themes produced by existing automated processes.

In other words, it utilises the web and social media as a promotion platform to present participants with a short article to read, and then presents them with keywords or phrases to be selected based on their perceived relevance.
The requirements of the task were detailed as web design, with programming being non-essential (I quickly found it to be essential!).
The technologies I decided upon, for ease of getting started, were PHP and MySQL. These are both technologies I have dabbled with before, and they have the added bonus of running on my local machine without hosting, for a faster development and test cycle.
I referred back to my personal website for some refreshers on CSS and database queries in PHP, which has at last proven not to be a waste of my time.
First of all I created two templates for keyword selection. The first used groups of keywords, making it clear that there were specific groups to select from; the origin of these groups was undetermined, with the PhD student's algorithm, another keyword algorithm, and chance all being options. The second template used an individual checkbox for each keyword or phrase. This was the chosen presentation format, as it keeps each keyword's origin anonymous and so prevents the malicious or false data that might otherwise have been collected.
To further deal with the possibility of such results, it was decided that the database entries would track certain information about each result, so that it could later be pruned if deemed necessary.
Unfortunately at this moment I do not have a publicly available working version of the site to link.
My Revision Technique

Like many students I find it hard to revise. In fact, despite being boring, procrastination seems to be the name of the game we play. Exams, for me, mean that learning additional material for recreation is not productive for the topics the exams cover. I find that even just reading a book pushes other, sometimes more interesting, information into my head. This doesn't help the revision I undertake; in fact it seems to cloud all the definitions, acronyms and relations I've made with my everyday life, following the advice from Nathan Ghan's audio book "How on earth do I get a first?": by getting D.R.U.N.K. (Define, Relate, and Use New Knowledge).

Using this method I go through module notes and slides and find all of the keywords, acronyms and concepts. I list them, then write short definitions for each where possible. I then try to think of an example of its everyday use in a wider context than the lecture. After I have done this, I see whether I can summarise the keyword with one other word, not directly related to the subject, but with a strong enough link that I can reach the keyword without too much trouble.

One of my lecturers has spoken about the human ability to retain information and facts over a three-day period. In their experimental psychology work they discovered that the mind forgets 40% of information learnt over a 24-hour period. As such, revision done weeks in advance of exams, although useful, would possibly not leave knowledge at the desired standard for sitting an exam. The duration of revision is also a factor in knowledge retention: after a 90-minute period of revision, the ability to retain information drops to 10%, according to a lecturer of mine.

The act of compressing notes in this manner helps me to retain the information, not for longer, but with greater ease. Next year I intend to do this from the off. No more leaving it until exams are approaching.
Hopefully it will be knowledge which I retain for the majority of my working life, and in this field I hope that it stays relevant despite the ever advancing state of technology.