Responsible AI

Six Guiding Principles

In recent years we have seen many AI breakthroughs: self-driving cars, intelligent assistants, AI beating the world's best Go players, and more. AI has a great deal to offer us.
Unfortunately, negative examples have also appeared, such as mass video surveillance, deepfakes and election manipulation. Many examples in the media stoke fear of what AI makes possible. These fears are caused partly by a lack of knowledge of AI technologies and partly by actual dangers that AI poses. We believe that companies working in AI have a responsibility to help explain the technology and to think about how potential adverse outcomes can be prevented. We have formulated six principles that guide our work on responsible AI.

We aim for responsible AI that is:

Human-Centric

We design technology that works for people, not against them

We stay in close contact with the users of our products through structured surveys and interviews to assess their needs (e.g. researchers who use FaceReader; police investigators who use DataDetective). We also contribute to societally relevant projects such as We Are Data, which helps participants become aware of the kinds of data that technology can gather about them.


Fair

Our algorithms are fair with minimal bias

People with darker skin are underrepresented in many datasets, which often leads to lower classification performance for these groups. We mitigate this by balancing the composition of our training data and by continuously testing our software against benchmark datasets (e.g. Gender Shades). In this way we strive to achieve the same high level of accuracy for different groups of people.
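
To make this concrete, the sketch below shows the kind of per-group check such benchmark testing involves: classification accuracy is computed separately for each demographic group, and a large gap between groups signals bias that needs to be addressed. This is an illustration only, not our production test suite; the file name and column names are hypothetical placeholders.

    # Illustrative sketch: per-group accuracy on a labelled benchmark set.
    # The CSV file and its columns ("group", "label", "prediction") are
    # hypothetical placeholders, not an actual product artefact.
    import csv
    from collections import defaultdict

    def accuracy_per_group(rows):
        correct, total = defaultdict(int), defaultdict(int)
        for row in rows:
            total[row["group"]] += 1
            correct[row["group"]] += row["label"] == row["prediction"]
        return {group: correct[group] / total[group] for group in total}

    with open("benchmark_predictions.csv", newline="") as f:
        scores = accuracy_per_group(csv.DictReader(f))

    for group, accuracy in sorted(scores.items()):
        print(f"{group}: {accuracy:.3f}")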


Privacy Friendly

Privacy protection is embedded in the design of our technology

We incorporate a Privacy by Design approach in our products and always respect participants' privacy in the datasets we use. For example, in our research tool FaceReader Online, clients sign a data processing agreement and participants must give informed consent before taking part. Additionally, we offer the option to store only anonymous metadata instead of video recordings.


Transparent

Our algorithms and motives are openly explainable

Our algorithms are not a black box, and we have tools and documentation in place to explain our results to our users. For example, many of the steps in DataDetective are logged, and relevant clusters in the data can be inspected and explained by reference to similar cases. In addition, as an R&D company, we frequently publish on our new technology, providing transparency into how our algorithms work.


Secure

We safeguard all data entrusted to us

We have security protocols in place to ensure that our stakeholders' data is safe with us. For example, all communication with our database servers (e.g. DataDetective, FaceReader Online) takes place over an encrypted connection.
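
As a simple illustration of what an encrypted connection means in practice, the sketch below opens a TLS-protected connection to a PostgreSQL server with psycopg2 and refuses to connect if the server's certificate cannot be verified. The host name, database and credentials are hypothetical placeholders and do not describe our actual infrastructure.

    # Illustrative sketch: enforcing a TLS-encrypted database connection.
    # Host, database name and credentials below are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="db.example.com",
        dbname="cases",
        user="app_user",
        password="...",          # in practice, loaded from a secrets store
        sslmode="verify-full",   # refuse unencrypted or unverified connections
        sslrootcert="ca.pem",    # certificate authority we trust
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone())
    conn.close()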


Competent

We design technology according to up-to-date standards

Our technology performs well in quality assessment tests (e.g. Stockli et al., 2017). We invest considerable effort in keeping our algorithms up to date, and we label new developments that have not yet been fully validated as experimental.
