How fast is technology accelerating? How closely are we actually being watched? Can we truly have control over our privacy?
These are the prominent questions driving the much-discussed debate over AI and its ethical implications, a debate that has moved increasingly to the foreground in recent months.
Nowadays, Artificial Intelligence is often presented as the most striking frontier in technology. In 2019, the World Economic Forum defined it as “the engine that drives the Fourth Industrial Revolution”.
Furthermore, some perceive it as the “paramount support base for a sustainable development of mankind”.
On the other hand, it has also been accused not only of depriving everyone of their privacy, but also of causing the far broader problem of jobs being taken over by robots and computers.
So, before indulging in any misleading assumptions, let us attempt to clarify what the term “Artificial Intelligence” actually stands for.
Although opinions on the matter vary, modern dictionaries agree that AI is a sub-field of computer science concerned with how machines can imitate human intelligence. The English Oxford Living Dictionary, for instance, defines it as: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
The truth is that, quite unexpectedly, AI has a surprisingly long history.
It was first conceived as a discipline back in 1956, in New Hampshire, when, during a summer workshop, a ten-person team was asked to create a machine capable of simulating every facet of human intelligence and its capacity for learning.
AI has come a long way since 1956, revolutionising not only our daily lives but also the modus operandi of research centres and companies.
In recent years it has, in fact, acquired major relevance in primary fields, from the economy to society, the environment and healthcare.
Nevertheless, at the current pace, the relationship between man and machine, between humanity and artifice, is being suffocated by a complex system of rules and hurdles that is extremely difficult to manage, comprehend and analyse.
We find ourselves in the midst of the so-called “third wave” of industrial transformation. The origin of the “man-machine system” lies in Henry Ford’s standardised procedures, which gave way to automation in the 1990s.
As time goes by, that relationship deepens further and further, resulting in an ever tighter interplay between the two.
Within the healthcare system, AI has enabled us to benefit from automated learning systems that identify pathologies more accurately and diagnose them earlier.
Artificial Intelligence is also spreading into the juridical field via so-called Predictive Justice systems, which use algorithms to predict a judge’s orientation on a particular court case.
Furthermore, this technology enables automobile engineers to design and build more efficient vehicles.
AI learns through direct experience: thanks to a technique called machine learning, it can teach itself by understanding its own mistakes.
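This idea of learning from mistakes can be illustrated with a minimal sketch (a toy example, not any particular product’s method): a one-parameter model repeatedly measures how wrong it is and nudges itself in the direction that shrinks the error.

```python
def train(points, steps=200, lr=0.01):
    """Fit y = w * x by gradient descent: the model learns from its error at each step."""
    w = 0.0
    for _ in range(steps):
        # Measure the mistake: gradient of the mean squared error over all points.
        grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        # Correct the model slightly in the direction that reduces the error.
        w -= lr * grad
    return w

# Data generated by the hidden rule y = 2x; purely from its errors,
# the model recovers a weight close to 2.
data = [(x, 2 * x) for x in range(1, 6)]
print(round(train(data), 2))  # → 2.0
```

The loop is the essence of machine learning in miniature: predict, compare with reality, adjust, repeat.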
Issues, debates and hurdles arise as soon as we become conscious of this peculiar mechanism.
AI uses the endless flux of big data that each of us produces second by second as an instrument both to train itself and to comprehend the surrounding reality more thoroughly: a reality that now encompasses human beings, nature and machines.
In 2015, Aleksandr Kogan obtained permission from Facebook to download data from some of its users’ profiles for academic purposes. A few years later, in 2018, it emerged that those data had been used in Donald Trump’s political campaign, through Cambridge Analytica.
Moreover, in 2015 Google’s image-recognition system automatically labelled photographs of Black people as “gorillas”. A similar fate befell pictures of men cooking, which were classified as female.
After blatant incidents such as these, it became increasingly clear that AI must be placed within an ethical dimension: not as a mere idea, but as a full-fledged necessity.
It is becoming ever more imperative that we be able to manage, and precisely draw, the subtle line between the amount of information AI may obtain from us in order to flourish and what constitutes an actual violation of our privacy.
Today there is a fully-fledged ethics of AI, which studies the economic, social and cultural consequences of its development.
As a consequence, many institutions have begun to mobilise, asserting their will to take part in the broader debate as leading actors.
In 2018 Google established an “AI ethics board” within the company, while in 2019 several US cities banned facial recognition in public spaces.
In recent years, Google, Microsoft and Salesforce have firmly opposed the use of their own AI in hunting down migrants, and have therefore begun to invest in privacy protection.
In 2015 Elon Musk co-founded OpenAI, a non-profit research organisation aimed at promoting and developing a “friendly AI” from which everyone could benefit.
Nevertheless, the initiatives promoted by these various institutions have proved insufficient over the years, as news from China demonstrates: in April 2019, in less than a month, more than half a million people were scanned so that the government could keep track of the Uyghurs.
In January 2020, the EU ushered in the new year by declaring that the time had come to take serious action, such as working towards a safer and more “ethically correct” AI capable of safeguarding every citizen’s privacy. It also proposed banning facial recognition in public spaces for up to five years, except for security or research purposes.
The EU formally conveyed to all its member states that every developer and user of AI must be bound by specific duties.
In the EU’s view, an ethical AI “ought to be capable of acting under human oversight; it must be robust and safe, guarantee confidentiality and excellent data governance, be transparent, and promote diversity […]”
In the midst of our technological wave, the moment has come for us to raise awareness and realise that each step forward entails responsibilities and requires an ethically regulated application.
It is vital to be able to contain dangerous implications while respecting human rights; however, we must not block innovation. We cannot allow this process to halt, since it has been, and still is, a defining phenomenon of our reality.
Translation by Lorenzo Tarchi