Advanced data analytics allows researchers to recognize patterns, predict outcomes, and assign probabilities. Robots equipped with artificial intelligence can read human expressions and feelings, diagnose disease, simulate a personal response, and recommend remedies or actions.

What ramifications does this greater transparency have for the future of human beings? Does a greater ability to predict individual fates translate into a more predictable future for all? How much choice do we have in shaping the impact of big data and AI on privacy and public discourse, on emotions and behavior? Will technology determine us, or can and must we determine its applications and effects?

ACADEMIA SUPERIOR invited three renowned, out-of-the-box thinkers to explore these issues. Stanford Associate Professor Michal Kosinski had a formative impact on the field of psychometrics with his groundbreaking research ten years ago on the predictive capacity of data analytics. Today he is a vocal critic of the risks he helped expose, yet remains an optimist about the potential of new technologies to improve lives.

Nadia Magnenat Thalmann shares that optimism. The founder and head of MIRALab at the University of Geneva and Director of the Institute for Media Innovation at Nanyang Technological University in Singapore was an early pioneer in the field of computer animation. She sees herself as an artist, she told us, and Rodin’s work helped inspire her to create the humanoid robot Nadine in her own image.

Author, journalist and former politician Susanne Gaschke is a sceptic when it comes to the impact of new technologies on the way we read, learn, and communicate. Klick, her manifesto against “digital dumbing down”, raised eyebrows when it was published ten years ago yet appears prescient in hindsight. She admitted to us that her election slogan “analogue instead of digital” was difficult to carry out in practice, but made a persuasive case for reflecting on when and how to set limits.

Discussion amongst the experts and the representatives of ACADEMIA SUPERIOR was nuanced and wide-ranging. Exploring where new technologies are taking us, Kosinski and Gaschke were forthright in describing drawbacks such as loss of privacy or polarization of opinion. Kosinski went so far as to declare that privacy was dead and argued that while we might mourn its loss, the transparency we gain could provide countervailing benefits – for a young student in a poor school district where an algorithm could provide better personalized instruction than overworked teachers could manage, or by opening a world of search engine options to a resident of a country with an autocratic regime.

“PEOPLE FEEL AFFECTION IN THE PRESENCE OF NEW TECHNOLOGY.”

Magnenat Thalmann also shared examples of the benefits social robots could provide: as diagnostic assistants searching for patterns that could indicate a condition such as bipolar disorder or dementia, or as companions for the elderly. She argued that humanoid robots offer otherwise isolated old people a “sense of presence, less loneliness” and said that if it were a choice between holding a teddy bear and enjoying the company of a Nadine, her clear preference would be for the latter: “It’s a question of dignity.”

Discussing limits of and on predictive technologies consumed a good part of the exchange throughout the day. Regarding inherent limits to capacity, Kosinski was adamant “that the perfect algorithm, the perfect prediction of the future, does not exist. You always are going to have some bias.” But he insisted that it is harder to root out bias in police officers, judges or customs officials – the human counterparts to algorithms in the fields of law or border protection – than it will be to develop techniques for recognizing and balancing hidden bias in technology.

“ALL LINEAR PREDICTIONS ARE SUSPECT TO ME.”

Likewise, Magnenat Thalmann declared that even as AI develops new capabilities, robots will remain machines, capable of simulating but under no circumstances actually feeling emotions such as empathy or warmth. Pointing out that she is intimately acquainted with the inner workings, the wafers and cables, of a humanoid such as Nadine, she said interaction with a social robot will remain qualitatively different from that with another human: “Humans are so much more complex and the interaction we have with humans is so much richer, it’s no comparison.”

Yet it was clear that our use of predictive technologies is affecting us as humans: even if Nadine herself cannot feel, Magnenat Thalmann seems to experience something like affection when interacting with her creation. Gaschke asserted that overreliance on online teaching methods could undermine students’ ability to think critically and creatively, and said her newspaper’s online readers tended to communicate more aggressively and polemically.

All three guests want to see policymakers take action to set standards and limits for predictive technology. In Gaschke’s words: “I don’t see that any technology can be exempt from being a subject of political decision-making.” How to regulate was less obvious, and here sharp differences of opinion emerged.

Gaschke expressed her hope that payment models will emerge that allow for stronger data protection, while Kosinski condemned such models as reinforcing social inequality and ushering in a two-class system. He did, however, support rules compelling tech giants to share data with startups. Magnenat Thalmann described scientists’ interest in inserting ethical rules and legal standards into the software with which future AI is programmed, saying the problem is not technology but humans, and declaring in words that resonated with most of those in the room: “It’s up to us to decide what to do with our tools.”

VITA

Dr. Melinda Crane moderates high-level panel discussions and conferences for numerous organizations and companies and gives lectures on various transatlantic topics. She is a frequent panel guest and commentator on German television and radio and regularly analyzes US policy for the news channel n-tv.

An experienced TV anchor and chief political correspondent for Deutsche Welle TV’s English-language program, she also comments on German and European politics and moderates the international talk show “Quadriga”. In 2014 she received the Steuben-Schurz Media Prize for her contribution to transatlantic understanding.

Melinda Crane studied history and political science at Brown University and law at Harvard, and earned her doctorate in the political economy of development assistance at the Fletcher School of Law and Diplomacy.

As an international consultant for the discussion program “Sabine Christiansen”, Crane has produced interviews with Kofi Annan, Bill Clinton, Hillary Clinton and George Bush, among others. She has also written for the “New York Times Magazine”, the “Christian Science Monitor”, the “Boston Globe”, and the “Frankfurter Hefte”.