The Centralisation Problem
This year, Decentralised AI (dAI) has captured the industry spotlight. The resignation of Stability AI CEO Emad Mostaque, who stepped down to pursue decentralised AI, brought these concerns to the fore. His departure reflects a growing consensus that AI development should fall under the stewardship of a diverse global community.
This followed a series of controversies in the governance of generative AI: deepfakes, the OpenAI boardroom fiasco, Gemini AI's data bias, and the Getty Images lawsuit against Stability AI.
The problem with centralised control over AI creation
The centralisation problem presents a formidable barrier to AI innovation. Under the status quo, the world's largest corporations steer the trajectory of AI development according to their own objectives, which do not necessarily align with the public interest.
When AI is controlled by centralised corporations, their biases and values are amplified on a global scale. They decide who gains access to the models, and their alignment choices often degrade model performance.
The consequences are low public participation, restricted access to computing power, and amplified bias and inaccuracies stemming from scarcer, lower-quality training data, alongside a missed opportunity for AI to realise its full potential as a force for good.
There is a pressing need for an equitable distribution of rewards for those who help to refine these models.