Physiognomy is often described as the pseudo-science of inferring character from facial and physical characteristics.
What is fascinating about it is the idea that as we age, we carry with us the signs of who we are. Our wrinkles trace the story of our predominant facial expressions through the decades. Our posture says a lot about our employment and status. Our mouth shape and mandibular position reflect how we have wanted to portray ourselves over time, menacing or friendly, for example.
There are, however, also concerning aspects to this pseudo-science. In particular, as our ability to process Big Data grows, physiognomy will play along the edge between good and bad science: the more images and videos of faces, expressions, and human interactions we store, the more opportunities A.I. will have to mine that training material and learn prejudice and discrimination.
It would be uplifting to many to think that western societies will treat the matter in accordance with their set of values, but alas that does not seem to be entirely the case at this stage, and that consolation alone would hardly be enough anyway.
A few key facts to help provide some perspective:
- judges in the USA are already receiving A.I. assistance when ruling on prisoners, and while the software is not fed data that could discriminate by gender or race, it does discriminate by age. This is just one particularly alarming example among many.
- it is important to keep in mind that our values are less universal than we may be led to think; they are in fact only one of many multiversal principles, as I explained before. That is to say, while discrimination is an ethical taboo for us, other cultures are more open to it.
- expanding on the above, it is also important to remember that values change with time. For example, in the first half of the 20th century concentration camps were widely adopted as a viable solution to “cleanse” society of the threat of unwanted minorities… if you think of that as a uniquely German abomination, think again: the USA and Australia may seem above suspicion, but they too adopted concentration and internment camps, together with a very long list of other countries.
Even leaving aside the cases above, where we may actively accept discrimination, the core of deep learning and A.I. lies in having a large database, clean of “spurious correlations”, from which to learn, and this is very tricky to source even when acting in good faith.
In that respect, the risk of learning from a bad set of images/videos is that, for example, a US-based algorithm would “successfully” infer that the colour of someone’s skin is an indicator of criminal attitude.
To explain the phenomenon we need to remember the statistics mantra: “correlation is not causation”.
This is the true key to the topic: however we want to spin this subject, the major implication is not restricted to the harm that may come from combining physiognomy and A.I.; it is a much wider warning that we should always make sure A.I. does not confuse correlation with causation. An extremely complex task.
The line between correlation and causation is in fact extremely blurred in many fields, let alone those where prejudice plays a role.
Consider the following example:
Fact: there is a proven statistical correlation between the use of specific psychiatric medications and suicidal tendencies.
Inference: depending on how you choose to read this correlation, it can be used to “prove” that:
a) higher doses are necessary, or
b) the drug caused the self-harming behaviour.
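To make the ambiguity concrete, here is a minimal simulation in Python; the coefficients are made up for illustration. It implements reading (a): a hidden confounder, illness severity, drives both the prescribed dose and the risk of self-harm. The observed correlation comes out strong even though, in this toy model, the drug itself has no effect at all, which is exactly what reading (b) would wrongly conclude from the same numbers.

```python
import numpy as np

# Toy model with invented coefficients: illness severity is an
# unobserved confounder driving BOTH dose and self-harm risk.
rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(size=n)                     # hidden confounder
dose = 2.0 * severity + rng.normal(size=n)        # sicker patients get higher doses
self_harm = 1.5 * severity + rng.normal(size=n)   # sicker patients are at higher risk

# A naive analyst sees only dose and outcome, and finds a strong link:
print(np.corrcoef(dose, self_harm)[0, 1])         # ~0.74, despite zero causal effect

# Regressing out the confounder (which the naive analyst cannot observe)
# makes the spurious correlation vanish:
dose_resid = dose - 2.0 * severity
harm_resid = self_harm - 1.5 * severity
print(np.corrcoef(dose_resid, harm_resid)[0, 1])  # ~0.0
```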
Statistical studies are riddled with such scenarios where a scientist’s conclusion needs validation from the scientific community.
In general, this is what draws the line between what we call an “exact science” (e.g. mathematics, where 2 + 2 = 4 unequivocally) and an inexact science (e.g. medicine, where a drug has unpredictably different side effects for different users).
As we follow the topic, you may have noticed that it is growing into a huge tree of ethical and scientific implications that are, get this, correlated with physiognomy but not caused by physiognomy…
that is to say: none of these criticisms is unique to physiognomy, nor can they be pinned on this particular practice; they are shared with any A.I. and deep learning technique that automatically profiles and labels humans.
In light of what we have discussed so far, it becomes obvious that the main flaw of physiognomy is not that it poses a particular or unique threat, but rather that it uses only a fractional subset of the available information: static facial and physical characteristics alone, instead of integrating them with more dynamic ones such as choice of vocabulary, tone of voice, posture, body language, clothing, etc.
In fact, what seems more likely to happen is that anyone wanting to use physiognomy will bundle that information with other more or less discriminatory techniques in order to make decisions… the court example mentioned above is only one of many possible applications: as we saw, a person’s appearance is not yet a playing factor there, but age is, along with many other “indicators” of what are considered risk factors.
Let’s explore another example: suppose we record a job interview and have an A.I. determine whether the candidate is fit for purpose.
A properly trained algorithm would combine physiognomy with dynamic characteristics to try to infer as many as possible of the following rankings (a toy sketch follows the list):
- a match against the success rates of similar past candidates who got the job
- a match against their future coworkers, for a culture-fit ranking
- an intelligence ranking, or even rankings for the specific type of intelligence required for the job
- attitude towards hard work
- how solid their health is
- …
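Here is a deliberately simplistic sketch in Python of how such a pipeline might aggregate these sub-rankings into one hiring score. Every feature name and weight below is invented for illustration; a real system would learn them from (potentially biased) historical data, which is precisely the trap discussed above.

```python
from dataclasses import dataclass

# Hypothetical sub-rankings in [0, 1], as upstream models might emit them.
# Feature names and weights are invented for illustration only.
@dataclass
class CandidateScores:
    past_hire_similarity: float  # resemblance to previously successful hires
    culture_fit: float           # similarity to future coworkers
    job_intelligence: float      # task-specific aptitude estimate
    work_attitude: float
    health_proxy: float          # inferred from voice/appearance: ethically fraught

WEIGHTS = {
    "past_hire_similarity": 0.35,
    "culture_fit": 0.25,
    "job_intelligence": 0.20,
    "work_attitude": 0.15,
    "health_proxy": 0.05,
}

def rank_candidate(s: CandidateScores) -> float:
    """Weighted sum of sub-rankings: the kind of opaque single number
    a hiring tool might hand to a recruiter."""
    return sum(getattr(s, name) * w for name, w in WEIGHTS.items())

# Note the trap: 'past_hire_similarity' dominates, so the score mostly
# rewards resemblance to whoever was hired before - a correlation with
# past choices, not a cause of future performance.
print(rank_candidate(CandidateScores(0.8, 0.6, 0.7, 0.9, 0.5)))  # 0.73
```

The weighted sum is the simplest possible aggregator; a production system would use an opaque learned model instead, which makes the same correlation-versus-causation trap far harder to spot.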
Many of these rankings will get pretty accurate with an appropriately trained A.I.
If you are thinking that this is technically very efficient and effective for the task of finding the best employee for a job, that’s probably because it is.
If you are also thinking it seems cringeworthy and inhuman, that’s probably also for the same reason.
The faults of this approach are multifaceted, and they begin with pitting determinism against self-determination… i.e. the idea that things are predetermined (someone’s face or tone of voice can only be marginally tweaked) rather than subject to our individual ability to determine who we want to be.
This is extremely important to bear in mind, as the success rates of this technology and its ratings could induce us to overlook the fact that overusing it would open the doors to a Matrix-like world where we sacrifice happiness for efficiency.
It is once again the bee and the beehive at work: in recent years we have slowly been reworking the balance between what is good for the bee and what is good for the beehive, shifting gradually towards the latter. Deploying this technology on a large scale would bring definite benefits to the beehive, allocating each and every bee to their perfect spot within the productive cycle. The flaw: we are not bees, and we have a stronger impulse to self-determination than most animals.
This approach just won’t take into account the individual aims and desires of each one of us. In adapting new technologies to new scenarios we always need to choose where we want to position our society.
In this particular case we are choosing our position along the line that runs between extreme efficiency, order, and discipline on one side, and self-determination, individualism, and happiness on the other.