Meeting Minutes + Actions 17/11/2020
- First and most important task is to identify all possible useful assertions from all contributors (NRCs, vendors, etc.) and deduplicate them against the current list of RVF and DROOLS assertions
- Meeting ACTIONS 17/11/2020:
- ACTION: AAT to create a google sheet containing the current list of RVF and DROOLS assertions, and share with all working group members so that they can:
- a) use that to dedupe their own rules against
- b) add any unique, useful rules to the Requirements tab in that google sheet
- ACTION: Everyone to:
- a) dedupe their own rules against SI rules
- b) add any unique, useful rules to the Requirements tab in the google sheet
- Alejandro has completed - no significant gaps!
- Matt is 2/3 of the way through - some very minor gaps
- Jeremy has no capacity to run a full analysis, but can't foresee any significant gaps
- Patrick has been away, will start now
- Implementation Tests:
- Implementation testing is very tricky at the Release stage, so to mitigate this we can:
- identify areas of content where high risk changes have been implemented in this cycle, and flag them up for author review as part of the AP
- simulate upload into NRC/vendor systems and databases, to prove there are no issues for our immediate users
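The "flag high-risk changes for author review" idea above could be sketched roughly as follows. This is a hypothetical illustration only: the field names assume a simplified RF2-style delta file, and the single "inactivation" rule stands in for whatever definition of "high risk" the working group agrees on.

```python
import csv
import io

def flag_high_risk(delta_tsv):
    """Scan this cycle's delta rows and return components that look
    high risk (here, hypothetically: anything inactivated this cycle)."""
    flagged = []
    reader = csv.DictReader(io.StringIO(delta_tsv), delimiter="\t")
    for row in reader:
        if row["active"] == "0":  # component inactivated in this cycle
            flagged.append((row["id"], "inactivated"))
    return flagged

# Toy two-row delta: one inactivation, one active row.
delta = (
    "id\teffectiveTime\tactive\tmoduleId\n"
    "123\t20201117\t0\t900000000000207008\n"
    "456\t20201117\t1\t900000000000207008\n"
)
print(flag_high_risk(delta))  # [('123', 'inactivated')]
```

The flagged list would then feed into the Authoring Platform as a review worklist, rather than blocking the release itself.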
- We also need to be able to automatically validate the actual assertions themselves (RVF + DROOLS)
- Perhaps by creating "failure" package(s) that contain known failure content for each and every assertion, to be run before and after every assertion change/new assertion added?
- Only issue with this would be the overhead of maintaining these failure packages, updating them every time we add or change an assertion
- DROOLS already does this, so we just need to replicate it for the RVF (which has already been started to some extent)
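The failure-package idea above amounts to a self-test of the validation rules: each known-bad package must trip exactly the assertion it was built for. A minimal sketch, with an invented example assertion (real RVF/DROOLS assertions run against full release packages, not in-memory rows like these):

```python
def assert_active_concepts_have_fsn(package):
    """Hypothetical example assertion: every active concept must have
    an FSN. Returns the ids of concepts that violate the rule."""
    return [c["id"] for c in package["concepts"]
            if c["active"] and not c.get("fsn")]

# One known-failure package per assertion, seeded with content that
# should make that assertion (and only that assertion) fail.
FAILURE_PACKAGES = {
    assert_active_concepts_have_fsn: {
        "concepts": [{"id": "123", "active": True}],  # active, no FSN
    },
}

def self_test_assertions():
    """Run each assertion against its failure package; an assertion is
    healthy only if it still detects the seeded failure."""
    return {
        assertion.__name__: bool(assertion(bad_package))
        for assertion, bad_package in FAILURE_PACKAGES.items()
    }

print(self_test_assertions())  # {'assert_active_concepts_have_fsn': True}
```

Running this before and after every assertion change would catch the case where an edit silently stops an assertion from firing, which is the failure mode the minutes are worried about.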
- We need to be able to better maintain Assertion Groups:
- Requirement for the Front End of the RAD
- Also a requirement for the AP, as we need a flag to allow assertions to be run against either a) ALL content, or b) just CURRENT cycle changes (i.e. not historical content). This should be controllable at the Assertion Group level, so that we can run either level of validation at different points (e.g. CURRENT content only for Task validation, but ALL content for Project + Staging validation).
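The group-level scope flag described above could be modelled along these lines. The group names and mapping are assumptions for illustration, not the actual RVF configuration:

```python
from enum import Enum

class Scope(Enum):
    ALL = "all"          # validate all content, historical included
    CURRENT = "current"  # validate only this cycle's changes

# Hypothetical assertion groups, each carrying the proposed scope flag.
GROUPS = {
    "task-validation": Scope.CURRENT,     # fast feedback while authoring
    "project-validation": Scope.ALL,      # full check at project level
    "staging-validation": Scope.ALL,      # full check before release
}

def scope_for(group_name):
    """Resolve which slice of content a validation run should cover."""
    return GROUPS[group_name]

print(scope_for("task-validation").value)     # current
print(scope_for("staging-validation").value)  # all
```

Keeping the flag on the group (rather than on each assertion) means the AP only has to choose a group per validation point, which matches the Task vs Project/Staging split in the minutes.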
- Future requirements:
- Improve the RVF build and containerise process, to allow all end users to easily spin up the RVF locally and run their own packages through it
- OR speak to DevOps to ascertain the cost of allowing external users to access the SI RVF API and use that instead (costs??)
- DevOps confirmed we need some discovery work to understand what the impact of each use would be:
- We've just had to upgrade the current servers, as they were continuously running out of RAM with just one person using the RVF, and they are currently among our most expensive boxes
- We also have no ability to rate limit at present; we would probably need to set up a managed API gateway service to do that. We have considered this for the browser in the past, but have not had the time to invest in figuring it out
- Even better would be standing up a hosted service that we make available for everyone to run their packages through
- Implementation tests for final end user implementations (beyond just uploading into a database/system)