We are extremely subjective in defining “intelligence,” let alone “artificial intelligence” (AI). Still, one definition of “intelligence” has gathered broad support.
According to Alex Wissner-Gross, intelligence is a force, F, that acts so as to maximize future freedom of action. To most people, AI is basically robots walking around looking like humans. For most AI scientists, subjectivity still carries a huge weight in defining AI, and in most cases AI is still measured against human intelligence. But what if we don’t recognize AI when we see it?
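Wissner-Gross and Freer make this definition precise as a “causal entropic force.” The following is a sketch of that formula as I recall it from their 2013 paper; treat the exact notation as mine, not a quotation:

```latex
% Causal entropic force: intelligence as a force that drives a system
% toward states with the most accessible future paths.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X = X_0}
```

Here $T_c$ is a “causal path temperature” setting the strength of the force, and $S_c(X, \tau)$ is the entropy of the causal paths accessible from macrostate $X$ over a time horizon $\tau$: the force pushes the system toward configurations that keep the most future options open, which is exactly “maximizing future freedom of action.”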
While many used to argue that Bitcoin itself is not a branch of AI, most agreed that it is at least the longest-standing independent artificial life (AL) that owns itself. In a previous article for ITU, I wrote about a branch of AI called Multi-Agent Systems (MAS).
Blockchain is an evolved form of MAS. The Bitcoin blockchain in particular is permissionless, borderless, resilient to attack, and driven by crypto-economics, incentivizing millions of people to work on it and its evolution. Its devices compete for resources; it is Byzantine fault tolerant and has strong protection against Sybil attacks.
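The Sybil resistance comes from proof-of-work: creating identities is free, but having them heard costs real computation. Here is a minimal sketch of the idea in Python; it is a toy illustration, not Bitcoin’s actual mining code, and the string-prefix difficulty check is a simplification of Bitcoin’s numeric target:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with
    `difficulty` hex zeros. Expected cost grows ~16x per extra zero,
    so flooding the network with fake identities (a Sybil attack)
    requires proportional real-world work."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block", difficulty=4)
print(digest.startswith("0000"))  # → True: a valid proof was found
```

Verifying a proof takes one hash, while producing it takes many thousands; that asymmetry is what lets a permissionless network weigh votes by work rather than by identity count.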
The technology continues to evolve and prevail. Certain advances in network protocols may allow it to become mainstream and efficiently scalable, while maintaining deep decentralized governance.
Can we learn from Blockchain and apply it to other autonomous AI systems?
The AI for Good Global Summit, which will take place from 15 to 17 May in Geneva, Switzerland, will be one of the most influential AI summits in the world. If we look at thousands of years of wars between groups of people, all parties involved appeared to have “good” intentions toward their own group. Yet the “good” on one side rarely equals “good” across the board. How will a group of 300+ top AI scientists in Geneva ensure that “AI for Good” is in fact really good?
What if you successfully make AI good for every human being, and then pass control of it to someone else who changes the “good” part? What if an attack on a centrally controlled AI is initiated from within? Can you imagine the power of the AI we are building being maliciously used against the next generation of homo sapiens by other groups of homo sapiens, or against a selected group? Could this be the most pressing issue that every human being should be worried about?
As part of the AI for Good Global Summit, the AiDecentralized Track will play a key role in introducing some of the attack vectors most AI practitioners are unaware of, and a possible route towards a solution using some of the science learned from the evolution of Blockchain. AiDecentralized is an ACM global initiative to bring the world’s 870 000 AI practitioners together with 280 000 blockchainers and cryptographers. The intent is for these communities to collaborate, increase AI security, and ensure it is well thought out in advance, towards a true “good” for all homo sapiens.
Continuing to cooperate to ensure security
Autonomous Decentralized Governance is a security model, and like any security model, it is only as strong as its weakest link. A single central control system is not viable; it will eventually be exploited. If we have learned anything from history, it is the beast in each and every one of us that we should fear the most, not AI replacing our jobs. Quoting Yuval Noah Harari: “Sapiens rule the world, because we are the only animal that can cooperate flexibly in large numbers.” We should continue cooperating and ensure proper security while creating the most powerful instruments in the history of humanity.