Responsible AI

Our Six Guiding Principles

Recent years have seen numerous AI breakthroughs, and these require us to be conscious of both the benefits and the risks of this technology. We believe that companies working in AI have a responsibility to help explain the technology and to think about how potential adverse outcomes can be prevented. To safeguard this, the SMR group follows six responsible AI principles that guide our work in a responsible direction.



We design technology that works for people, not against them

We stay in close contact with the users of our products through structured surveys and interviews to assess their wishes (e.g. researchers who use FaceReader; police investigators who use DataDetective). We also contribute to societally relevant projects such as We Are Data, which helps participants become aware of the kinds of data technology can gather about them.



Our algorithms are fair with minimal bias

People with a darker skin color are underrepresented in many datasets, which often leads to lower classification performance for them. We mitigate this by balancing the composition of our training data and by continuously testing our software against benchmark datasets (e.g. Gender Shades). In this way we strive for the same high level of accuracy across different groups of people.
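The kind of per-group testing described above can be sketched as follows. This is an illustrative example, not the vendor's actual code: the helper name, labels, and toy data are all assumptions; the point is simply that accuracy is computed separately per demographic group so that disparities become visible.

```python
# Hypothetical sketch: measuring classifier accuracy per demographic group,
# in the spirit of benchmarks such as Gender Shades.
from collections import defaultdict


def accuracy_per_group(y_true, y_pred, groups):
    """Return {group: accuracy}, computed separately per group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, prediction, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {g: correct[g] / total[g] for g in total}


# Toy data: two groups ("A", "B") with one misclassification in group "B".
y_true = ["smile", "neutral", "smile", "neutral"]
y_pred = ["smile", "neutral", "smile", "smile"]
groups = ["A", "A", "B", "B"]

print(accuracy_per_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```

A large gap between the per-group scores would signal a bias that balancing the training data should then reduce.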


Privacy Friendly

Privacy protection is embedded in the design of our technology

We incorporate a Privacy by Design approach in our products, and we always respect participants’ privacy in the datasets we use. For example, in our research tool FaceReader Online, our clients sign a processing agreement, and participants are required to give informed consent before taking part. Additionally, we offer the option to store only anonymous metadata instead of video recordings.



Our algorithms and motives are openly explainable

Our algorithms are not a black box, and we have tools and documentation in place to explain our results to our users. For example, many steps in DataDetective are logged, and relevant clusters in the data can be inspected and clarified through similar cases. In addition, as an R&D company, we frequently publish on our new technology, providing transparency into how our algorithms work.



We safeguard all data entrusted to us

We have security protocols in place to ensure that our stakeholders’ data is safe with us. For example, all communication with our database servers (e.g. DataDetective, FaceReader Online) takes place over an encrypted connection.
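As a minimal sketch of what an encrypted client connection involves, the snippet below uses Python's standard-library ssl module. The host name and function are placeholders; the actual transport details of the products mentioned above are not public.

```python
# Hypothetical sketch: opening a TLS-encrypted connection to a server.
# ssl.create_default_context() enables certificate and host-name
# verification by default, which is what makes the channel trustworthy.
import socket
import ssl

context = ssl.create_default_context()


def open_encrypted(host: str, port: int = 443):
    """Return a TLS-wrapped socket to the given host (placeholder helper)."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

The key design point is that the default context both encrypts the traffic and authenticates the server, so data cannot be read or redirected in transit.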



We design technology according to up-to-date standards

Our technology performs well in quality assessment tests (e.g. Stöckli et al., 2017). We invest considerable effort in keeping our algorithms up to date, and we label new developments that have not yet been fully validated as experimental.