Monday, April 18, 2016

What Happens When Algorithms Lie? (Bots)

This article was written for other industries, such as retail or sales organizations, but much of its content can be extrapolated to medicine, health and wellness, and technology.

However, who watches the algorithms, or the bots?



Most humans are conditioned to trust authority, but what causes people to decide whether a computer is authoritative? That is, are we more or less likely to trust a piece of information when it comes from a computer rather than from a person? Researchers continue to study this question, but it's one that may become increasingly important in a future of bots: is it a human or a computer? Will the phrasing of a response change based on the bot's degree of confidence in its answer? And to what extent will regulation play a role, especially in areas of legal or medical advice?
Rather than it being too early to start considering these questions, they're exactly what came to mind when I read a recent article about technology companies hiring writers, poets, and other professionals to figure out what communicating with a bot should feel like.
“Now she’s applying her creative talents toward building the personality of a different type of character — a virtual assistant, animated by artificial intelligence, that interacts with sick patients,” the article says of one woman who used to write scripts in Hollywood.
And later in the article, “how human can — and should — the bot sound? Should the virtual assistant be purely functional or should it aspire to connect emotionally with the user?”
Should a shopping bot provide positive affirmation about the clothing items in my virtual shopping cart? “Oh, you’ll look hotter in this,” the bot coos as it pushes a $150 sweater as an alternative to the $25 sweatshirt I was considering. Is that a lie? Doesn’t a salesperson at a store do the same thing? Is it better or worse when a computer does it to 10,000 customers simultaneously?
Will multivariate testing of our bot future include ethical parameters in addition to performance measurement? Techniques like priming can dramatically influence behavior. For example, asking whether you are a “good person” and having you answer in the affirmative, before I request something of you, increases the likelihood you’ll do what I want, driven by a need to live up to the identity you just created for yourself. The software for running multivariate tests at scale already exists in abundance; what it measures is up to us.
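The mechanics of the kind of multivariate test described above are simple, which is part of what makes the ethical question pressing. A minimal sketch in Python, with entirely illustrative variant names (a neutral phrasing, an affirming one, a priming one — none of these come from the article):

```python
import hashlib

# Hypothetical bot-phrasing variants under test. The names are
# illustrative; a real test could compare any set of scripts.
VARIANTS = ["neutral", "affirming", "priming"]

def assign_variant(user_id: str) -> str:
    """Hash the user id so each user consistently sees the same phrasing."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def conversion_rate(events):
    """events: iterable of (variant, purchased) pairs.

    Returns the fraction of interactions per variant that ended in a
    purchase -- the 'performance' axis. Nothing here measures whether a
    winning variant won by manipulation, which is exactly the gap in
    ethical parameters the paragraph above asks about.
    """
    totals, wins = {}, {}
    for variant, purchased in events:
        totals[variant] = totals.get(variant, 0) + 1
        if purchased:
            wins[variant] = wins.get(variant, 0) + 1
    return {v: wins.get(v, 0) / totals[v] for v in totals}
```

The assignment is deterministic per user, so 10,000 customers can be split across phrasings invisibly and repeatably — the test optimizes whatever metric it is given, and nothing more.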

My friend Anil Dash talks about the need for CS departments to teach ethics, and I’ve always nodded along — but as we move toward a conversational, AI-driven future, maybe we’re about to see a step-function increase in the importance of teaching these concepts.
