In the tenth and final episode of our “Human 2040” series, entitled “I Communicate”, we look at how we will communicate with each other. Will we still need a smartphone to talk to a voice assistant in 2040? How will technology help read our emotions and anticipate our needs? And in an era of rapidly developing artificial intelligence, will we be able to tell real information from fake news on our own? In this latest instalment, Polityka Insight analysts take a closer look at trends such as communication without active human involvement and the growing role of intelligent sensors. The podcast on communication is hosted by Andrzej Bobiński, managing director of Polityka Insight; his guest is Mariusz Chochołek, president of T-Systems Polska, who is also responsible for the largest business client market in T-Mobile Polska.
CIRI INTENDS TO LAUNCH A STATE-OF-THE-ART VIRTUAL ASSISTANT THAT WILL BE DIRECTLY CONNECTED TO THE USER'S BRAIN
The company's decision sparked tremendous enthusiasm on the New York Stock Exchange.
Ciri, the voice assistant, has accompanied us in meetings for many years, taking ever more extensive notes: writing summaries, filling in calendars, making appointments and buying the products we need. Since the beginning of the 2030s the app, previously used mainly in business, has been built into the operating system of most mobile devices as a permanent feature and has become one of the most widely used digital tools of the last decade.
Yesterday the app informed users of its plan to implement a direct connection to the user's brain. Ciri will randomly select a representative group of 10,000 people, who will be offered participation in the test phase of the project. If the pilot is successful, the functionality will be rolled out to all users in the short term as part of the upcoming major annual system update. The New York Stock Exchange reacted enthusiastically to this unexpected news: Ciri's share price rose by 121%, and the run on the stock lifted the AI500 (+63%), the Cyber100 (+24%) and the exchange as a whole by almost 5%.
The ‘Thought is the new voice’ project is likely to mark a breakthrough in the development of a new generation of virtual assistants. For years, many companies have worked on an interface capable of reading people's wishes and needs directly. Most attempts to date, however, have foundered on so-called ‘thought interference’: devices collecting plans and commands were unable to screen out the subconscious and produced an uncontrolled, endless inventory of instructions and actions. The fact that Ciri, regarded as one of the most traditional and conservative technology companies, has decided to publish plans for a direct interface suggests that a technological breakthrough must have taken place. What exactly was it? Unknown, as the details of the pilot are kept strictly secret by the company. We will learn more on 1 January, when the selected users begin testing.
The plan to roll out the direct interface was also warmly received by the cybersecurity industry. Experts note that the decision to commercialize the latest generation of Ciri must signal a breakthrough in its security features. For years Ciri has struggled with relentless hacker attacks and is seen as something of a Holy Grail for most cybercriminals. According to CyberUSofA, the upcoming update must therefore also mean a technological breakthrough in cybersecurity: the company would not develop such sensitive technology without complete confidence that it can protect its product from external interference.
AI-EMOTION WILL OBTAIN THE APPROVAL OF POLISH SOCIETY FOR DIGITAL AND MENTAL HEALTH
A system that assesses our interactions with our environment and tailors the content we receive to our needs can be used in the treatment of loneliness and depression.
AI-Emotion is a system of sensors for tracking emotions. Users choose which of their own parameters they wish to monitor: biological (e.g. blood pressure), physical (e.g. weight), environmental (e.g. air temperature), behavioural (e.g. sleep length), and above all mixed, psychophysical parameters. The sensors capture a full record of an individual's experience of the external world. The sensations of all the senses and all reactions to external and internal stimuli are recorded, so we know exactly how we feel and why. The business use of these technologies is obvious: companies can see how their products are perceived in the real world. The system also feeds algorithms that help us choose what we desire: what to eat, watch or listen to, and which physical activity will make us feel better.
Despite initial controversy, the ‘quantified self’ movement has won over a host of enthusiasts and become a permanent part of our lives. The simple algorithms of the 2020s merely reproduced our choices and encouraged us to deepen them. With technological progress, programs began to diversify our choices, looking for alternatives based on our other preferences and ever-larger datasets. Building commercial offerings on our digital footprint, however, raised objections and concerns about breaches of the existing anonymization guidelines.
The AI-Emotion era has begun. The system allows us to track our emotions and reactions more closely, and thanks to anonymized datasets, MDPs (mega data packs) and IUDTs (individual unplugged data trusts), it has become possible to create secure offerings that genuinely exceed our needs and expectations.
Applications for quantified-self data have long been sought in other areas of life. Trials were launched combining emotion-based experience data with mental health care, and today such applications look numerous. AI-Emotion will facilitate the work of therapists, who can not only talk through recollections and opinions but also draw on data analysis and discuss recorded key memories. The use of VR tools to anticipate patients' responses in different situations is also being considered. The initial resistance of Polish society to Digital and Mental Health has been broken, and work is about to start on new applications and scripts for psychotherapists to use in their daily work with patients.
THE EUROPEAN COMMISSION UNVEILS PLAN FOR DEEP FAKE DIRECTIVE
Its success, however, is moderate: as part of the Directive's implementation, Member States will decide on platforms' obligations to combat deep fakes.
The Commission and WoSoM.org (World Association of Social Platforms) have reached an agreement under which Brussels will prepare a directive governing responsibility for hosting deep fakes. Platforms will have to inform all users (logging in from EU servers) whether a post has been ‘deep faked.’ This is a considerable success for DG DigCom, which had been trying, ineffectively, for many years to have the platforms recognized as publishers rather than mere hosts of published content. The platforms' second line of defence, the claim that they could not take responsibility for determining whether posted videos had been manipulated, has also been rebutted.
The Commission's success is only partial, however. It has not been possible to establish a single EU-wide protocol of so-called ‘takedown rights and obligations.’ In implementing the EU rules, each country will be able to set its own procedures for handling material identified as a deep fake. Individual countries are in a weaker negotiating position than the Community and, with few exceptions, will in practice be forced to ignore the EC's guidelines calling for automatic blocking of content. According to experts, only Sweden, France and Scotland will decide to block automatically.
Poland is among the countries likely to temporarily block ‘content of uncertain origin’: users will have to demonstrate that a video may be socially harmful for it to be blocked. These countries are also unlikely to block content from so-called verified publishers. Countries such as the Netherlands and Ireland, in turn, will confine themselves to informing users that content is a deep fake. These more digitally liberal states fear lawsuits over taking down material that might be regarded as artistic or political expression; blocking it could then be seen as an infringement of the principles of freedom of expression.