The terms artificial intelligence, machine learning and deep learning are often used interchangeably, which does injustice to the development of the field. AI is already over 60 years old. Initially, it usually meant machine learning (learning from examples and experience) and expert systems (the formal processing of expert knowledge). “Neural networks” attempt to model the functioning of the human brain and can be used without prior abstraction or rules; for example, they recognize complex patterns. One problem is their “black box” behavior: the results are not always traceable. “Deep learning” is a newer form of neural network that, thanks to much improved algorithms (and more powerful computers), achieves results currently unmatched by other methods.
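The distinction drawn above between expert systems and learning from examples can be sketched in a few lines of Python. This is an illustrative toy, not from the article: `expert_rule` encodes knowledge by hand, while `train_perceptron` learns the same behavior purely from labelled examples.

```python
def expert_rule(x1, x2):
    """Expert system style: the knowledge is written down as an explicit rule."""
    return 1 if x1 == 1 and x2 == 1 else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    """Machine learning style: the same behavior is learned from labelled examples
    using the classic perceptron update rule, with no rule coded by hand."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1          # nudge weights toward the target
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Labelled examples of a simple pattern (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
learned = train_perceptron(examples)

# The learned model agrees with the hand-coded expert rule on every example.
assert all(learned(x1, x2) == expert_rule(x1, x2) for (x1, x2), _ in examples)
```

The “black box” point also shows up even in this toy: the learned weights produce correct answers, but they do not read as an explanation the way the hand-written rule does.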


Making artificial intelligence comprehensible

One of the key points for the acceptance and responsible use of AI is to understand, grasp and, to a certain extent, predict machine decisions, but of course also to trust them. The accuracy of algorithms increases with the amount of data: the more data, the more precise the result. No human being can hold as much information in the brain as a computer. That is why, in the long term, computers will win out in evaluating large amounts of information.

The question is what part artificial intelligence will play in people’s lives. On the one hand, many tasks will be taken over from humans, such as the control of machines, the compilation of findings in medicine, the independent management of bank accounts, and so on. On the other hand, data analysis enables extensive monitoring and precise knowledge of the individual. This might lead to a reduction of people’s freedom of will.

Privacy is a recent phenomenon

Privacy is an important development of the last centuries. Today’s notions of privacy developed along with humanism, liberalism, individualism and the bourgeoisie. The importance of privacy is linked to the value of a person’s autonomy and personality and, to a great extent, determines what it means to be human. People should therefore fight for the value of privacy and not give it up recklessly. Without privacy we are much more easily manipulated and remotely controlled. According to the latest findings of brain research, the concept of free will alone is not a reliable way to guarantee the freedom and independence of human beings.

Ultimately, the decisive question will be whether mankind succeeds in using the new digital methods and instruments for its own positive development, or whether humans will evolve into hybrid man-machine beings.


Social robots as an opportunity and a challenge

Even though we say social robots have no emotions, they can simulate emotions, and people will perceive them as emotions in the robots. If someone feigns friendliness, we tend to accept it as such. It is in fact more about the sensation on the part of humans than about the robots themselves. The development of social robots will be very important due to the large increase of older people in highly developed countries. These robots can make a very positive contribution to nursing and communication. However, it becomes more problematic when social robots are used in areas where they take job opportunities away from working people. The economic overvaluation of robots would degrade humans in their humanity. We have to prevent that.

Digital technologies are changing people

Humans are for the most part analog beings. Our learning processes in the social as well as the intellectual realm are analog, as is our communication, for example through language or handwriting. According to some experts, the dominant use of digital techniques in all areas of life leads to a change in, and in most cases a reduction of, creativity, empathy and our most basic human qualities. But these are precisely the characteristics that lead to innovation and make technological progress possible in the first place. Finding the right balance here becomes challenging and is also a question of strategy.

China has a long-term strategy

In the competition for digital progress among world regions, China is catching up. The big difference between Europe and China has always been that China has long-term strategies, which is a big advantage in technological development. Maybe this is something we can learn in the sense of “Predictive Futures”: what position do we want to take in the world on these topics in the future, and how can we achieve it? Europe should answer this question for itself.

Questions for the future

With regard to the important challenges facing humanity, AI could make a decisive contribution to their solution. Will it be possible, for example with the help of artificial intelligence, to understand the climate even better and thus take the measures necessary to maintain a livable ecosystem on Earth? Can AI help us solve other global issues, such as the distribution of water and food through negotiation rather than war?

The potential expected of AI is great; whether it will be fulfilled remains an open question. This leaves us with a strong demand for further research and insight.