Should AI development and deployment be governed through democratic means? If so, how exactly?

Positions compiled by: Analysts at The Society Library
Last Updated: Mon May 20 2024
This is a demo paper on the topic of governance over AI. It represents only a few stages of mapping, and no links are currently available. Please excuse the work in progress. To unpack the collection, click on words and the magnifying-glass icon next to them.
Position: AI development and deployment should be governed through democratic means, with regulations formulated by elected representatives and public referendums because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, game theoretic, and AI alignment and safety reasons.
Position: AI development and deployment should be governed in a decentralized and democratic way, with a focus on community-based decision-making and local governance because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, game theoretic, and AI alignment and safety reasons.
Position: AI governance should primarily be the responsibility of international organizations, ensuring that democratic processes at a global level guide the development and deployment of AI technology because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, game theoretic, and AI alignment and safety reasons.
Position: Private industry experts should lead governance in concert with input from democratic public forums, balancing technical expertise with public opinion because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, game theoretic, and AI alignment and safety reasons.
Position: AI development should not be subject to democratic governance; instead, market forces should naturally guide the development and deployment of AI technologies because of economic, safety, ethical, political, philosophical, legal, other governance-related, game theoretic, and AI alignment and safety reasons.
Position: AI governance should be based on a mixed approach that includes democratic processes, expert panels, and multi-stakeholder committees, ensuring diverse input and expertise because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, and AI alignment and safety reasons.
Position: There should be minimal government intervention in AI development and deployment, with democratic processes only being used to establish broad ethical guidelines because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, and AI alignment and safety reasons.
Position: AI developments that have significant societal impacts should be subject to democratic governance, while less impactful AI technologies could be governed by industry standards without direct democratic input because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, game theoretic, and AI alignment and safety reasons.
Position: AI governance should mirror the principles of a direct democracy, with continuous input and voting by the public on key decisions and policies because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, and AI alignment and safety reasons.
Position: Democratic governance of AI should be adaptive, with mechanisms in place to rapidly adjust regulations in response to technological advancements and societal feedback because of economic, societal, safety, ethical, political, philosophical, legal, human rights-related, democratic, other governance-related, and AI alignment and safety reasons.