Meeting Minutes + Actions 08/12/2020
  • The first and most important task is to identify all possible useful assertions from all contributors (NRCs, vendors, etc.) and deduplicate them against the current list of RVF and DROOLS assertions
    • ACTION: Matt, Patrick, et al:
      • a) complete deduplicating their own rules against the SI rules
      • b) add any unique, useful rules to the Requirements tab in the Google Sheet
    • ACTION:  Dion to continue investigating the usefulness of the code there, especially in relation to moduleDependency / refsetDescriptor validation, implementation testing, etc
    • ACTION:  SI to work out whether or not we can automate the validation of the moduleDependency refset
      • ACTION:  Dion to provide some assistance on this, as Ontoserver has made some headway in this respect
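As a starting point for the automation discussion, two moduleDependency checks that lend themselves to automation can be sketched as below. This is a minimal illustration, not SI's actual implementation: all function names are hypothetical, and it assumes the RF2 reading that a refset row with moduleId=M and referencedComponentId=D declares that module M depends on module D.

```python
# Hypothetical sketch of two automatable moduleDependency refset checks.
# Assumption: a row {moduleId: M, referencedComponentId: D} means
# "module M depends on module D".

def find_undeclared_modules(rows, modules_in_release, root_modules=()):
    """Modules used in the release that declare no dependencies at all
    (legitimate only for root modules, which sit at the top of the tree)."""
    declared = {row["moduleId"] for row in rows}
    return sorted(m for m in modules_in_release
                  if m not in declared and m not in root_modules)

def has_dependency_cycle(rows):
    """Module dependencies must form a DAG; detect cycles with a DFS."""
    graph = {}
    for row in rows:
        graph.setdefault(row["moduleId"], set()).add(row["referencedComponentId"])
    visiting, done = set(), set()

    def dfs(node):
        if node in done:
            return False
        if node in visiting:      # back-edge found -> cycle
            return True
        visiting.add(node)
        if any(dfs(dep) for dep in graph.get(node, ())):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(module) for module in list(graph))
```

A real check would also compare sourceEffectiveTime/targetEffectiveTime values against the dependency versions actually published, which is where Ontoserver's experience could help.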
    • ACTION:  Once we have the complete Gap Analysis and have identified all possible new assertions to be added into the SVS:
      • First we need to go through and confirm whether or not we already have those assertions validated in some other part of the process, or perhaps as part of the MRCM checks, etc
      • Then we need to prioritise all remaining assertions to confirm when they need to be included - a) prior to Frequent Delivery, b) prior to exposing the new SVS service externally
    • ACTION:  Continue to record, as we go through the validation gap analysis, things that we'd like to validate but can't due to missing machine-readable metadata (e.g. map source terminology), so that we can then feed these requirements into the other Metadata Working Group
    • ACTION:  AAT + Kai to speak to Terance to get estimated costs of hosting the SVS as a service
    • ACTION:  In future meetings we need to spec out what a future service would look like:
      • Donation of rules
      • Whitelisting, etc
  • Implementation Tests:
    • Very tricky at the Release stage, so to mitigate this we can:
      • identify areas of content where high-risk changes have been implemented this cycle, and flag them for author review as part of the AP
      • simulate upload into NRC/vendor systems and databases to prove there are no issues for our immediate users
  • We also need to be able to automatically validate the actual assertions themselves (RVF + DROOLS)
    • Perhaps by creating "failure" package(s) that contain known failure content for each and every assertion, to be run before and after every assertion change/new assertion added?
    • The only issue with this would be the overhead of maintaining these failure packages, updating them every time we add or change an assertion
    • DROOLS already does this, so we just need to replicate it for the RVF (which has been started to some extent)
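The failure-package idea above is effectively a regression suite for the assertions themselves. A minimal sketch of the harness follows; all names are hypothetical, and a real version would invoke the RVF/DROOLS engines against packaged RF2 content rather than model an assertion as a plain predicate.

```python
# Hypothetical sketch: each assertion ships with a "failure package" of
# content known to violate it; the suite flags any assertion that no
# longer fires against its own known-bad content.

def run_assertion(assertion, package):
    # Stand-in for invoking the real RVF/DROOLS engine; here an
    # assertion is modelled as a predicate over each row of the package.
    return [row for row in package if not assertion(row)]

def find_silent_assertions(assertions, failure_packages):
    """Return the names of assertions that detect nothing in their
    known-failure package, i.e. assertions that have stopped working."""
    silent = []
    for name, assertion in assertions.items():
        if not run_assertion(assertion, failure_packages[name]):
            silent.append(name)
    return silent
```

Run before and after every assertion change: any name in the returned list is an assertion whose known-bad content now passes, which is exactly the regression the failure packages are meant to catch.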
  • We need to be able to better maintain Assertion Groups:
    • Requirement for the Front End of the RAD
    • Also a requirement for the AP, as we need a flag to allow assertions to be run against either a) ALL content, or b) CURRENT cycle changes only (i.e. not historical content).  This should be controllable at the Assertion Group level, so that we can run either level of validation at different points (e.g. CURRENT content only for Task validation, but ALL content for Project + Staging validation).
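The ALL/CURRENT flag described above could be modelled roughly as below. This is an illustrative sketch only (names and the effectiveTime-based filter are assumptions, not the AP's actual design):

```python
# Hypothetical sketch of the scope flag on an assertion group: each
# group declares whether it runs against ALL content or only the
# CURRENT cycle's changes, filtered here by effectiveTime.
from dataclasses import dataclass, field
from enum import Enum

class Scope(Enum):
    CURRENT = "current"   # this cycle's changes only, e.g. Task validation
    ALL = "all"           # full historical content, e.g. Project/Staging

@dataclass
class AssertionGroup:
    name: str
    assertion_ids: list = field(default_factory=list)
    scope: Scope = Scope.ALL

def content_for_group(group, rows, current_cycle_time):
    """Select the rows a group should validate, based on its scope."""
    if group.scope is Scope.CURRENT:
        return [r for r in rows if r["effectiveTime"] == current_cycle_time]
    return rows
```

Putting the flag on the group rather than on individual assertions is what lets the same assertion run cheaply (CURRENT) at Task level and exhaustively (ALL) at Project/Staging level.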
  • Future requirements:
    • Improve the RVF build and containerisation process, so that all end users can easily spin up the RVF locally and run their own packages through it
    • Or speak to DevOps to ascertain the cost of allowing external users to access the SI RVF API and use that instead (costs?)
      • DevOps confirmed we need some discovery work to understand what the impact of each use would be. We've just had to upgrade the current servers, as they were continually running out of RAM with just one person using them, and they are currently among our most expensive boxes. It is also worth mentioning that we do not currently have the ability to rate-limit; we would probably need to set up a managed API gateway service to do that. We have considered this for the browser in the past but have not had the time to invest in figuring it out.
      • Even better: stand up a service that we make available to all to run
    • Implementation tests for final end user implementations (beyond just uploading into a database/system)