Technology: A Marriage of Science and Ethics

Recently, I traveled to Asia.  During my trip, I had the opportunity to visit IBM and discuss the present state of their cognitive platform Watson, along with strategic plans for its future development.  Watson's ability to analyze and draw insight from large amounts of disparate data, and to offer functionality very close to learning, is quite impressive.  During our discussion, my hosts touted its ability to provide bias-free answers and solutions to questions.  On this point, I challenged my host on two counts.

The first issue I had was that the statement categorized all bias as having negative value.  It's true that some bias can be misleading and hide answers, but other bias, intuition for example, can play a vital role in discovering solutions that no continuous stacking of facts could reach.

My second objection involved the possibility of generating bias-free solutions to any problem.  Many of the domains where a platform like Watson would be very handy, healthcare for example, are not filled with problems for which a single correct answer exists, but with problems for which multiple solutions may exist, each more or less effective given the criteria selected.  It would be more accurate, then, to refer to these solutions as suggestions.  And when solutions are evaluated, they are evaluated against some system of values.  Adjusting the system of values used can change the order of, or bias, the resulting suggestions.
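The point can be made concrete with a minimal sketch.  The treatment options, criteria, and weights below are all invented for illustration; the only claim is the structural one: the same candidate solutions rank differently depending on which value system scores them.

```python
def rank(options, weights):
    """Order options by a weighted sum of their criteria scores."""
    def score(option):
        return sum(weights[c] * v for c, v in option["criteria"].items())
    return [o["name"] for o in sorted(options, key=score, reverse=True)]

# Hypothetical treatment options, each scored 0-1 on three criteria.
options = [
    {"name": "Treatment A",
     "criteria": {"efficacy": 0.9, "affordability": 0.2, "comfort": 0.4}},
    {"name": "Treatment B",
     "criteria": {"efficacy": 0.6, "affordability": 0.8, "comfort": 0.7}},
]

# Two value systems: one prizes efficacy above all,
# the other weighs affordability and comfort heavily.
clinical = {"efficacy": 1.0, "affordability": 0.1, "comfort": 0.1}
patient  = {"efficacy": 0.4, "affordability": 0.8, "comfort": 0.8}

print(rank(options, clinical))  # Treatment A ranks first
print(rank(options, patient))   # Treatment B ranks first
```

Neither ordering is "correct"; each is simply faithful to the values it was given, which is the sense in which every suggestion carries a bias.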

I left thinking about how important value systems will be in the future.  I was reminded of a conversation I had with a friend a few months ago about the challenges of a self-driving car.  This problem not only gets at the heart of the issues involved with cognitive computers; it also involves technology that is just around the corner.  In an article published by the Stanford Business School titled "Exploring the Ethics Behind Self-Driving Cars," the author frames the issue in the following way:

You have a situation where the car might have to make a decision to sacrifice the driver to save some other people, or sacrifice one pedestrian to save some other pedestrians. And there are more subtle versions of it. Say there are two motorcyclists, one is wearing a helmet and the other isn’t. If I want to minimize deaths, I should hit the one wearing the helmet, but that just doesn’t feel right.

We don't live in a largely deterministic world, where solutions to problems can reside neatly on the right side of an equals sign.  In the future, cognitive computing will force society to discuss, debate, and agree upon sets of values to be used in varying contexts and situations.  The advancement of "intelligent" technology will require an ethical system to govern (bias) its actions.  The ethical system(s) used will have far-reaching implications, and this offers a great opportunity for the faithful to enter the fray.  Understanding and being able to articulate Catholic moral theology may soon become a valuable asset in our work to have a positive impact on society as a whole.

It’s been science’s success that has uncovered its limitations; limitations we should seize as opportunities.  We should not miss this opportunity to offer a very practical application of our very holy faith.

One comment

  1. Spaniard says:

    Good insight, Hythloday. I wonder if we need something akin to Isaac Asimov’s “Three Laws of Robotics” to govern the so-called intelligent tech? (Might need more than three… 🙂 )
