Blog 02: Ethics & Privacy in Affective Computing

Picking up from where I left off last week, this week's readings discuss the benefits of affective computing alongside its risks and shortcomings. With the boom in affective technologies, such as wearables that sense a person's emotional state, chatbots, personal assistants, and mood-recognition-enabled devices, comes a responsibility to develop moral and ethical frameworks for evaluating these interactions. This week's readings confirmed my impression from last week that affective computing is often viewed as an ethically fraught area of computer science. Gartner's 2015 hype cycle already flagged affective computing and its inflated expectations, and as research in the field grows, there is still considerable confusion about the tensions and challenges these technologies raise.

My takeaways from this week include a better understanding of how affective computing is implemented and of the challenges the field brings along with its advancements. I also learned about practices that can help mitigate those challenges. For me, the main concern is the misinterpretation of emotions. As we studied last week, emotions are something we have not yet managed to define. That being said, how can we make computers understand a concept that we humans have not been able to understand ourselves?

The answer to this question may be a bit confusing, but research and advancements in AI have produced technologies capable of recognizing and simulating emotions. Today there are numerous applications of affective computing. The most encouraged and positively received are the frivolous ones: lighthearted, low-stakes applications that are not considered to affect human morals in any way. Examples include electronic postcards, tutorial bots, and customer support bots. These applications are built to serve a particular purpose and solve a common problem.

The problem begins when applications are expected to provide human-like, real interactions. Even today's highly capable technologies struggle to identify emotions from physiological signals. Since we humans do not express ourselves according to an algorithm, the system's interpretations will not always be relevant to us. There is also a risk that such interactions could have an adverse effect on a user's mood or moral state, which is a serious issue for systems that evaluate humans. Sentiment analysis is one prominent example of where this could go wrong.
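To make the failure mode concrete, here is a deliberately naive, hypothetical sketch of lexicon-based sentiment scoring. The word lists and function are my own illustration, not any particular system or library, but the weakness is shared by more sophisticated models: words are scored out of context, so sarcasm and negation get misread.

```python
# A minimal, hypothetical lexicon-based sentiment scorer.
# Real systems are far more sophisticated, but the failure mode is similar:
# words are scored out of context, so tone and sarcasm are invisible.

POSITIVE = {"great", "love", "happy", "wonderful"}
NEGATIVE = {"bad", "hate", "sad", "terrible"}

def naive_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A sarcastic complaint is labeled positive, because "great" outweighs
# everything else and the scorer has no notion of context.
print(naive_sentiment("Oh great, my flight is delayed again"))  # -> "positive"
```

If a system that evaluates people takes such a label at face value, the misinterpretation propagates into decisions about a real person.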

One harm that might arise in the future is us, the humans, training computers to be our slaves, which could also breed increasing dependence on technology. The day may not be far off when technology sways every human decision. The readings this week made these issues feel genuinely alarming. In addition, the potential infringement of human rights can cause harm and needs consideration whenever affective systems are deployed.

That said, the field would not have progressed without significant benefits. The pro that stood out to me is the increased understanding of humanity that affective research offers: in understanding emotions, computers may one day achieve what we humans could not. I remember a saying from a wise man: "Humans don't know what they want!" In that case, if we had someone to make accurate decisions for us, we would succeed in every task we take on. Another advantage we already enjoy is the humanizing of technological communication. With personal assistants in almost every home, affective computing has made technology accessible and usable for a wider audience, which in turn raises technology literacy.

With these pros and cons on the table for the future development of affective computing, I also learned a number of guidelines that need to be considered to move the field forward ethically. The most important, in my view, is consent: no matter what benefits a technology offers, the subject's consent must always be obtained. Secondly, there should be clear communication that emotional analysis should not be treated as the sole ground truth. Furthermore, the technology should be described transparently to create awareness among users. Lastly, users should be given adequate control over their data, along with insightful feedback about how it is used.
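As a thought experiment, these guidelines could be encoded directly into a system's interface. The sketch below is purely illustrative; the names ConsentRecord, EmotionEstimate, analyze_emotion, and delete_user_data are my own inventions, not a real API. The point is the shape of the contract: analysis refuses to run without opt-in consent, returns an uncertain estimate rather than a flat verdict, and leaves the user in control of their data.

```python
# An illustrative, hypothetical sketch of consent-gated emotion analysis.
# None of these names come from a real library; they encode the guidelines.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user_id: str
    allows_emotion_analysis: bool = False  # opt-in by default, never opt-out

@dataclass
class EmotionEstimate:
    label: str
    confidence: float  # an estimate, not ground truth

def analyze_emotion(signal: list[float], consent: ConsentRecord) -> EmotionEstimate:
    # Guideline 1: no consent, no analysis -- fail loudly, not silently.
    if not consent.allows_emotion_analysis:
        raise PermissionError(f"user {consent.user_id} has not opted in")
    # Guideline 2: attach uncertainty so downstream code cannot treat the
    # label as the sole ground truth. (Dummy heuristic for illustration.)
    label = "stressed" if sum(signal) / len(signal) > 0.5 else "calm"
    return EmotionEstimate(label=label, confidence=0.62)

def delete_user_data(user_id: str) -> None:
    # Guideline 4: users retain control over their stored data.
    print(f"purging stored signals for {user_id}")
```

Encoding consent as a precondition, rather than a checkbox logged elsewhere, makes it impossible for a caller to "forget" the guideline.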

As I conclude this blog, a question comes to mind: in the rush to make computers think like us, will we someday start thinking like computers?

References:

[1] Hernandez et al., "Guidelines for Assessing and Minimizing Risks of Emotion Recognition Applications."

[2] Cowie, "The Good Our Field Can Hope to Do, the Harm It Should Avoid."