At SMR-group we work at the forefront of many different areas of AI, because we believe machines that understand data and people can help solve society's most challenging problems. As we create these empowering technologies, we are keenly aware that, with human beings at the center of our scope, we carry an important responsibility. Our advanced algorithms should never be used in ways where they no longer solve society's problems but instead contribute to them. That is why all development at SMR-group is guided by a set of important principles. These principles help ensure that our technology never harms or negatively influences the life of any person exposed to it; that these people receive equal treatment regardless of pre-existing societal biases or other discriminating factors; that their privacy is always safeguarded; and that any data we keep or own is handled with the utmost care and adequate precautions. A continuously updated governance framework ensures that we live up to these claims and comply with all relevant (inter)national guidelines and regulations.
A key element in our governance framework is a strict policy used to assess all sales requests.
For every request we infer three key pieces of information that, together with the relevant Product, form the so-called 4 P's:
This could be, for example, a University (Party) in Norway (Place) that wants to use FaceReader Online (Product) in a study of behavioral psychology (Purpose). For every request we evaluate the 4 P's against legal frameworks such as the GDPR and the upcoming European AI Act, against (inter)national sanctions and embargoes, and against relevant human rights and export regulations. Based on this evaluation, each P is assigned a risk level (minimal, limited, high, restricted); together, these levels determine the course of further action.
If any of the P's is restricted, the request is declined.
This holds for parties and places that appear on (inter)national consolidation lists or fall under relevant sanctions or embargoes. In addition, due to the inherent risks, we assign a restricted label to applications that relate to:
Additionally, we restrict our technology from being used in any scenario where a decision with the potential to negatively affect a person's life is made directly on the basis of our technology's output without a human bearing responsibility for that decision.
When there is a plausible risk of human rights violations, or when the assigned risk labels indicate high risk, the sales request is reviewed internally by our ethical and legal teams and requires CEO approval. To err on the side of caution we consider all countries on the EU sanctions list to be high-risk places, as well as parties such as:
We likewise consider the purposes listed in the European AI Act to be high risk.
In all scenarios that are neither restricted nor high-risk, our technology is sold directly. In relevant situations we will, however, still require the assurance from our clients that they commit to appropriate privacy and transparency obligations. To this end we have an end-use statement in place that automatically expires and whose renewal requires an evaluation of the preceding period.
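The assessment flow described above can be sketched as a minimal decision procedure. This is an illustrative assumption on our part, not SMR-group's actual implementation; all names and the string outcomes are hypothetical, and the real policy involves human judgment at every step:

```python
from enum import IntEnum

class Risk(IntEnum):
    """The four risk levels assigned to each of the 4 P's."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    RESTRICTED = 3

def assess(party: Risk, place: Risk, purpose: Risk, product: Risk) -> str:
    """Hypothetical sketch of the 4 P's decision flow.

    - Any restricted P: the request is declined outright.
    - Any high-risk P: internal ethical/legal review plus CEO approval.
    - Otherwise: direct sale, with an end-use statement where relevant.
    """
    risks = [party, place, purpose, product]
    if Risk.RESTRICTED in risks:
        return "declined"
    if Risk.HIGH in risks:
        return "internal review and CEO approval required"
    return "direct sale (end-use statement where relevant)"
```

For instance, a request from a party on an (inter)national consolidation list would carry `Risk.RESTRICTED` for the Party and be declined regardless of the other three P's.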