Summer work 2
After the initial interface was created to facilitate the back-end development of the site, I set about creating a basic database with two example articles with which to populate the page.
This was relatively quick for me to do, as it was similar to things I had previously done on my own website and on projects during my University course. I then added some randomisation so that it retrieved a random article from the database. I haven't tested this with more than two articles, but in theory it should scale easily.
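The random retrieval can be sketched roughly like this. The table name, columns, and use of SQLite here are all assumptions for illustration; the real site's schema is not described in the post.

```python
import sqlite3

# Hypothetical sketch: a tiny "articles" table with two example rows,
# mirroring the two-article test database described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO articles (title, body) VALUES (?, ?)",
    [("Example article 1", "First body"), ("Example article 2", "Second body")],
)

# ORDER BY RANDOM() is the simplest way to pull one random row; it is
# fine for small tables, though a random-offset query scales better.
row = conn.execute(
    "SELECT id, title, body FROM articles ORDER BY RANDOM() LIMIT 1"
).fetchone()
print(row[1])
```

Repeated requests would each pick one of the stored articles at random.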
I then tested that it correctly retrieved random articles from the database, and that the selections made by the participant were updated correctly in the database. I quickly realised that, due to the nature of the project, duplicate keywords may be generated, and the current implementation would have created a checkbox for each duplicate. So I set about ensuring not only that the checkboxes remained alphabetically ordered, but that their original order was still maintained server side to correctly update the database, while ensuring that this order would not be available to the participant, whether hidden on the web page or exposed by other means.
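The deduplication idea can be sketched as below. The keyword list and function names are hypothetical; the point is simply that the participant sees one alphabetically ordered checkbox per unique keyword, while the mapping back to every original position stays server side.

```python
# Hypothetical generated keywords, including a duplicate.
keywords = ["economy", "sport", "economy", "politics"]

# Server side: map each unique keyword to all of its original positions,
# so the database can still be updated against the original ordering.
positions = {}
for i, kw in enumerate(keywords):
    positions.setdefault(kw, []).append(i)

# What the participant sees: one checkbox per unique keyword, alphabetical.
display = sorted(positions)

def selected_indices(ticked):
    """Translate ticked checkboxes back to every original keyword slot."""
    return sorted(i for kw in ticked for i in positions[kw])

print(display)                        # ['economy', 'politics', 'sport']
print(selected_indices({"economy"}))  # [0, 2] - both duplicate slots
```

Because `positions` never leaves the server, the original ordering is not exposed in the page markup.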
This proved to be the most complex task as it involved both the business logic and display logic, which were together due to the site's small size and not wishing to implement an MVC framework for something with a small number of views.
Until this was completed I was left with some blank checkboxes; the database entry would still remain correct, but it would have been clear that there was a duplicate entry, which could also have confused participants.
The next issue was another visual one: the checkboxes, although tabulated, were 'smushed' around the page and looked odd. With a bit of CSS tweaking and removal of an override (it seems I had made a change and then undone it within another style), it was looking better.
Next up was to test it with a real article's output and determine whether the database updated correctly with Richard's analysis program. After a few tweaks the program's output was correct and we could begin the batch run over the many articles, which would go on to produce a very large SQL file over the next 40 or so hours!
Will be posting a link once the site is out there, which will hopefully be soon.