List of issues with test/survey


I’m going to use this space to document the issues and workflow problems I’m encountering with the test/survey function. This piece of the application is fully functional as is, but a number of things could be done to make it better.

  • Once a test is set to active, it cannot be deactivated without a SQL query. There should be a toggle that returns the test to an editable state.
  • The test preview doesn’t truly show the test as participants experience it: the HTML tags are visible and none of the formatting comes through.
  • You cannot edit a question’s answers; they have to be completely deleted and reentered.
  • Question jump logic should be adaptive: if I move a question up in the list, all associated question relationships should adjust automatically. (This may be a long shot; SurveyMonkey doesn’t even do this.)
  • Radio buttons select the first answer by default, so a user can simply click Submit without actually choosing anything, invalidating test results.
  • There is an arbitrarily low character limit for answer fields.
  • The ability to import/export tests would make sharing between libraries significantly easier (and would reduce errors).
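On the radio-button item above, the usual fix is twofold: render the options unchecked, and reject submissions that skip any question server-side. Here is a minimal sketch of the server-side half, assuming hypothetical data shapes (question IDs in a list, responses as a dict); none of this reflects the app’s actual code.

```python
def find_unanswered(question_ids, responses):
    """Return the IDs of questions the participant skipped.

    question_ids: every question ID in the test.
    responses: mapping of question ID -> selected answer (may be sparse).
    """
    return [qid for qid in question_ids if responses.get(qid) is None]


def is_complete(question_ids, responses):
    """A submission is accepted only when every question has an explicit answer."""
    return not find_unanswered(question_ids, responses)
```

This only works if the form itself renders radios without a pre-selected option, so that a blank submission really arrives blank.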
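For the import/export item, a portable text format like JSON would let libraries share tests without retyping them. A rough sketch, with an illustrative schema (the `name`/`questions` keys are placeholders, not the app’s real field names):

```python
import json


def export_test(test):
    """Serialize a test definition to a JSON string for sharing.

    `test` is a plain dict; stable key ordering makes diffs readable.
    """
    return json.dumps(test, indent=2, sort_keys=True)


def import_test(payload):
    """Parse a shared test definition, checking for the minimum structure."""
    test = json.loads(payload)
    if "name" not in test or "questions" not in test:
        raise ValueError("not a valid test export")
    return test
```

A round trip (`import_test(export_test(t))`) should reproduce the original definition, which also doubles as a basic sanity check when receiving a file from another library.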


As the app evolves, it would probably be advantageous to sort participants into specific programs based on their test scores to ensure ability-appropriate content. As the community contributes content that is easy to import into the GRA, this would be really cool.


In this “Test/Survey List,” the count isn’t working properly. We have several hundred results, but it shows 0.


We could increase the functionality of test scoring if the “Test/Survey Results” report allowed direct comparison of pre- and post-tests (instead of running reports for one after the other). It would also be very helpful if some test validation happened automatically, where suspicious patterns are detected (such as a string of four “A” responses) and erroneous responses cause an individual’s test to be removed from the report.
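The pattern check described above amounts to scanning each response sheet for a run of identical consecutive answers. A minimal sketch, assuming answers arrive as a simple list of choice letters (the run length of 4 matches the example above but would presumably be configurable):

```python
def has_suspicious_run(answers, run_length=4):
    """Return True if `answers` contains `run_length` identical answers in a row.

    answers: ordered list of a participant's selected choices, e.g. ["A", "B", ...].
    """
    streak = 1
    for prev, cur in zip(answers, answers[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak >= run_length:
            return True
    return False
```

Flagged sheets could then be excluded from the report, or listed separately for a human to review before exclusion.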

In our results this year (similar to last year’s), some scores indicate that a participant didn’t complete all items in the test before continuing (which shouldn’t happen) or selected multiple answers to single-answer questions (which also shouldn’t happen).