The challenges of AI regulation
If we had to define limits or regulations for artificial intelligence, how would they be implemented?
In times as unique as these, in the midst of a pandemic, our digital lives have accelerated exponentially: everyone has adopted digital tools like never before, allowing us to carry on even while absent from social life as we knew it. In this context, the artificial intelligence tools that have emerged have proved their value and usefulness, even though our view of their real impact remains blurred.
Against this backdrop, several authorities and countries are beginning to ask when is the right time to start analyzing how possible uses of artificial intelligence should be regulated. If we had to define limits or some form of regulation, how would it be implemented?
First of all, it must be recognized that there is no way to create a single regulation for artificial intelligence, given its fluidity: AI applications can be created for the most diverse purposes. If some level of regulation is necessary for artificial intelligence applications, what, after all, would be the correct regulation, one that does not hinder AI's evolution but protects the rights of those who use it?
Discussions on the topic generally start with initiatives to map the uses and applications of artificial intelligence, then define their risks and identify the cases in which something has gone wrong. Currently, the most critical cases concern the use of facial recognition tools and their limits, the lack of minimum rules for testing new AI applications, and the fact that, at the end of the day, we are all guinea pigs in the most diverse projects, whether we live in more or less developed regions of the planet.
In this scenario of rapid development and innovation, what ethical limits should we, as a society, put in place to avoid the impression that a dystopian future awaits us, frightened as we are by every news story about some new, exotic demonstration of the power of artificial intelligence? Most experts now seem to agree that military, public health, finance, and public security applications should face specific limitations on the use of artificial intelligence, given their enormous potential to substantially affect millions of people.
Does this mean that any regulatory initiative must address these industries vertically, defining specific rules for each one? To some extent, yes. It is increasingly clear that some level of vertical regulation is indeed necessary, given the unique characteristics of these industries. Some basic rules are already clear: decisions on financial credit, when taken solely by an algorithm, must be transparent and subject to human review when necessary; and automatic targeting by automated weapons systems must be prohibited, considering the many errors an armed conflict scenario would entail. It is a consensus among experts that such weapons systems should not use artificial intelligence for the time being.
In the health industry, on the other hand, the biggest problems stem from the criticality of the information collected by private companies and how such data will be used in the future, a paradox frequently noted by privacy-regulation specialists, since the proper functioning of artificial intelligence tools requires large databases holding a myriad of information about their users. Balancing convenience with security and privacy is one of the great challenges for any company that wants to stand out in this market, especially considering that well-stored data will not degrade or lose its validity anytime soon. Thus, given the routine and passive collection of health data through various digital services, the best regulations currently ensuring certain rights to users seem to be the European GDPR and, on our side of the Atlantic, the Brazilian LGPD. Professor Yuval Harari, for example, highlighted the risks and criticality of these health data in the first two minutes of this video.
Finally, it is worth noting that several technology giants have halted the sale of facial recognition tools to public security forces in the wake of the recent protests against structural racism in the United States and across the planet. In any case, several other tools use artificial intelligence in controversial ways, as this New York Times article has demonstrated.
It is too early to take a pessimistic view of technology and its uses, but perhaps the time has come for horizontal regulation to protect citizens worldwide in such a complex and increasingly digital era. The future of the 21st century will depend on the decisions we make in the coming years.