Today, the White House announced that it has secured voluntary pledges from many top AI companies to help ensure the security and ethical use of AI tools. This is a problem that genuinely needs attention, and it's encouraging to see progress in this area. Moreover, compared to other emerging technologies, the government appears to be working with the AI industry to address these issues sooner rather than later.
However, as with any nascent field, only time will tell whether any of this makes a difference. It is one thing to make a commitment, and quite another to follow through effectively. It is also unclear how many of these commitments can actually be enforced, and there are many players outside the US that they do not cover.
We must also be wary of these commitments turning into a cartel that concentrates power in this emerging field. This is not new. In the aftermath of the September 11 attacks, big tech companies worked with the government to improve security. While that collaboration was necessary and beneficial in many ways, it also led to a laxer view of privacy and a subsequent consolidation of power within those tech companies, as noted in the book 'The Age of Surveillance Capitalism'. We must be careful to ensure that history does not repeat itself in the realm of AI.
The companies involved in these pledges include Google, OpenAI, Meta, Amazon, Anthropic, Microsoft, and Inflection AI. This is only the initial list, and I won't be surprised if other large players (IBM, Oracle, Salesforce, Databricks, and more) join sooner or later.
According to the article on CNBC, the commitments include:
✅ Developing a way for consumers to identify AI-generated content, such as through watermarks.
✅ Engaging independent experts to assess the security of their tools before releasing them to the public.
✅ Sharing information on best practices and attempts to get around safeguards with other industry players, governments, and outside experts.
✅ Allowing third parties to look for and report vulnerabilities in their systems.
✅ Reporting limitations of their technology and guiding on appropriate uses of AI tools.
✅ Prioritizing research on societal risks of AI, including around discrimination and privacy.
✅ Developing AI with the goal of helping mitigate societal challenges such as climate change and disease.
The above list is a direct quote from the CNBC article.
This is one step in the long journey of AI ethics and security. We must continue to monitor these developments closely and ensure that the power of AI is harnessed for the benefit of all, not just a select few.
What do you think?