Your data is valuable, and you want to make the most of it.
Perhaps you're creating a digital twin, or looking to create new opportunities for your customers. Maybe you want to make your data more accessible and combine different data sources into a single source of truth; you might even want to infer new data from your existing data sets.
But before you can achieve any of these goals... do you know how "good" your data is? And what does "good" even mean to you?
For your data, "good quality" might mean reducing duplication, ensuring spatial accuracy, enforcing detailed attribution – or all of them!
Once you've decided on your definition of "good", you need a way to measure this for your data.
Our rules-based approach makes managing your data quality simple, flexible and transparent.
A rule tells us whether something is true (valid) or false (invalid). Because a rule is always expressed as a precise logical condition, it is easy to trace.
Rules are a quick and simple way of defining quality checks – for example, flagging duplicate records, verifying spatial accuracy, or confirming that required attributes are populated, as in the sketch below.
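As a minimal illustration, a rule can be written as a boolean predicate over a record. The field names ("name", "lon", "lat"), the example record and the bounding box below are assumptions made purely to show the idea of a rule as a precise logical condition, not a specific product API.

```python
# A minimal sketch: each rule is a boolean predicate over a record.
# The field names and bounding box are illustrative assumptions.

def rule_has_name(record: dict) -> bool:
    """Valid when the record carries a non-empty name attribute."""
    return bool(record.get("name", "").strip())


def rule_within_bounds(record: dict,
                       bounds=(-75.30, 39.85, -74.95, 40.15)) -> bool:
    """Valid when the coordinates fall inside a (min_lon, min_lat,
    max_lon, max_lat) box - roughly the Philadelphia area here."""
    min_lon, min_lat, max_lon, max_lat = bounds
    return (min_lon <= record["lon"] <= max_lon
            and min_lat <= record["lat"] <= max_lat)


station = {"name": "Engine 11", "lon": -75.16, "lat": 39.95}
print(rule_has_name(station), rule_within_bounds(station))  # True True
```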
Rules can be easily adjusted to reflect what quality means to you in any given scenario, evolving with your data.
You can change one rule at a time, note the impact and then adjust it as necessary. Changing one value in a rule can drastically affect your results – we’ll see this in our demonstration later on.
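To illustrate how a single value can swing the outcome, consider a hypothetical duplicate-detection rule whose only tunable parameter is a distance threshold. The helper function and field names below are assumptions for the sketch, not the demonstration's actual implementation.

```python
import math

# Hypothetical duplicate rule: a record is invalid if another record lies
# within `threshold_m` metres of it. The threshold is the single value whose
# adjustment can noticeably change how many records pass the rule.

def approx_distance_m(a, b):
    """Rough planar distance in metres between two (lon, lat) points."""
    mid_lat = math.radians((a[1] + b[1]) / 2)
    dx = (a[0] - b[0]) * 111_320 * math.cos(mid_lat)
    dy = (a[1] - b[1]) * 110_540
    return math.hypot(dx, dy)


def rule_no_nearby_duplicate(record, others, threshold_m=50.0):
    """Valid when no other record sits within threshold_m of this one."""
    return all(
        approx_distance_m((record["lon"], record["lat"]),
                          (other["lon"], other["lat"])) > threshold_m
        for other in others
        if other is not record
    )
```

Changing `threshold_m` from, say, 50 to 500 metres is a one-value edit, yet it can flip several records from valid to invalid.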
Consider the following data, representing the city of Philadelphia.
Currently, we can see six Fire Stations within the city.
We should ask: