How to regulate AI as legislation falls behind

By Antony Mutunga

The race to lead in the Artificial Intelligence (AI) technology space has seen the technology advance rapidly in the span of a few years. But despite the advantages that AI brings to the world, its privacy and security implications demand caution. 

A number of tech players have agreed to regulate their development of artificial intelligence. Google, Microsoft, OpenAI, Meta, Amazon, Inflection and Anthropic have agreed to new safeguards intended to ensure the safe, secure and transparent development of the fast-moving technology. How will they do that without losing their customers? 

The companies have become signatories to a document dubbed “ensuring safe, secure and trustworthy AI”, brokered by the U.S. government. As the masses enjoy the new technology, generative AI, in different forms including ChatGPT and AutoGenAi, just to mention a few, challenges still abound. 

According to Tim Mwadime, a software engineer and entrepreneur, AI will have positive impacts on the world, but the dangers it presents cannot be ignored. 

“It will definitely create new opportunities such as AI engineers and also be key in detecting misinformation. However, AI will lead to the loss of jobs such as customer service agents and it can also be used in tricking people and spreading disinformation as well,” Mwadime says. 

The fact that the technology is advancing faster than countries can pass laws to regulate it means there is little room for complacency. Under the US-backed document, for example, some tech companies have agreed to establish internal and external threat detection measures and programs. 

In order to certify the latest technology, players in the tech space will allow third-party testing, including competitions or events with prizes offered to experts who try to spot dangerous flaws. With security testing extended to outside experts, it will be easier to guard against risks such as cyberattacks.

Some tech companies have also agreed to put watermarks on any audio or visual AI content from any of their publicly available tools so that it is easy to identify. This will make it easier for users to distinguish between real content and ‘deepfakes’.

A deepfake is generally a photo, audio, or video that has been manipulated by Machine Learning (ML) and AI with the intention of making it appear to be something it is not. A deepfake video, for example, is one that has not been ‘reworked’ by conventional video editing software but generated or altered by AI.

So, watermarks will be crucial as deepfakes have proliferated across the internet, with even social media sites like Facebook and Twitter being used to spread false information.

Furthermore, the companies have committed to sharing information not only with each other, but with the government as well. As a result, they will publicly report any flaw or risk found in the technology. They will also oversee the development and adoption of shared standards and best practices for the safety of AI technology. 

Additionally, the tech giants will have to invest in and support research and development initiatives that are crucial to overcoming shared global challenges such as combating hackers and climate change.

All these regulatory interventions currently apply only in the US, as a number of tech companies are yet to embrace them. It is also interesting to note that, as the debate continues, small players in the tech industry are concerned about who stands to gain; they feel the tech giants may ultimately have an advantage when it comes to sharing information.

However, it is only a matter of time before other countries follow suit and agree on similar terms. There is a need to slow down the advancement of AI: if it continues to outpace regulation, a point will come where it is impossible to counter the negative impacts that many currently fear.