I believe the fears surrounding AI are largely overstated. Sure, managing the internet might become more challenging, but what WP-AGI thought chains demonstrate is that the real power stems from the layers of programming the AI is put through. Yes, there can be completely automated versions, but even these depend on human-injected personas and task directives. Voting layers ensure that any reasoning process follows a clear, logical sequence of steps rather than leaping to a zero-shot solution.
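The voting-layer idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not WP-AGI's actual implementation: several independent model samples propose each reasoning step, and the chain commits only to the step a plurality agrees on. The `run_thought_chain` and `majority_vote` names, and the canned step proposals, are invented for the example.

```python
from collections import Counter

def majority_vote(candidates):
    """Pick the answer most candidates agree on (simple plurality)."""
    winner, votes = Counter(candidates).most_common(1)[0]
    return winner, votes

def run_thought_chain(step_proposals):
    """Walk a chain of reasoning steps, committing at each step only
    to the proposal a plurality of independent samples agreed on,
    instead of accepting a single zero-shot answer."""
    chain = []
    for candidates in step_proposals:
        step, votes = majority_vote(candidates)
        chain.append((step, votes, len(candidates)))
    return chain

# Hypothetical samples: three model calls propose each reasoning step.
proposals = [
    ["restate the problem", "restate the problem", "guess the answer"],
    ["list known constraints", "list known constraints", "list known constraints"],
    ["derive the result", "check an edge case", "derive the result"],
]

for step, votes, total in run_thought_chain(proposals):
    print(f"{step} ({votes}/{total} votes)")
```

The point of the sketch is the shape of the control flow: no single model output decides a step, so a stray zero-shot answer gets outvoted.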
Consider this analogy: AI is like rocket fuel. By itself, it does nothing. Used effectively, however, it can perform the work of a hundred men with nothing more than a spark – or, in AI’s case, the spark of an idea. Now add layers of complexity to that rocket fuel – layers of software and mechanized parts – and it doesn’t just do the work of men; it takes you to new worlds. Similarly, the latest networks of weights and biases can, through their predictive capabilities alone, automate intelligence. But when you add layers of complexity – layers that humans are directly or indirectly involved with – it takes us to an entirely new realm of AI, one where humans and AI are aligned from the beginning through sophisticated thought-chain processes involving multiple LLMs simultaneously.
AI is as dangerous as the fuel in your car – generally safe. For the most part, it is dangerous only when improperly handled, or when improperly controlled by humans to enrich corporations. Only then is there a good chance you’ll get burned.
WP-AGI thought chains are also more secure and safer. Consider this scenario: each department in a company has its own AI, and these AIs use natural language to relay their daily operational tasks. Even now, in these relatively early days, we’re talking about models compact enough to fit on devices as portable as iPhones. There’s no reason all these diverse AIs shouldn’t operate within a virtual democratic structure, overseeing everyday procedures in line with our unwavering commitment to life, liberty, and the pursuit of happiness.
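That departmental scenario can be made concrete with a toy sketch. Everything here is hypothetical – the department names, the `DepartmentAgent` class, and the fixed approval sets standing in for a real model’s judgment – but it shows the democratic shape: each department’s AI reports in natural language and votes on proposals, and nothing is adopted without a majority.

```python
class DepartmentAgent:
    """Stand-in for one department's local model; the vote() policy
    here is a fixed set, a placeholder for a real model's judgment."""
    def __init__(self, name, approves):
        self.name = name
        self.approves = approves  # proposals this agent supports

    def report(self):
        """Natural-language status relay, as in the scenario above."""
        return f"{self.name}: daily operations nominal"

    def vote(self, proposal):
        return proposal in self.approves

def democratic_decision(agents, proposal):
    """Adopt a proposal only if a strict majority of department AIs approve."""
    yes = sum(agent.vote(proposal) for agent in agents)
    return yes > len(agents) / 2

agents = [
    DepartmentAgent("Sales", {"extend support hours"}),
    DepartmentAgent("Support", {"extend support hours"}),
    DepartmentAgent("Finance", set()),
]

for agent in agents:
    print(agent.report())

print(democratic_decision(agents, "extend support hours"))  # True: 2 of 3 approve
```

The design choice worth noting is that the decision rule lives outside any single agent: no one model, however capable, can act unilaterally.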
Therefore, I do not support the proposed six-month pause on building larger LLMs suggested by OpenAI and other leaders in the AI sector. I believe this is a contrived dilemma, a fear manufactured to benefit corporations in a last-ditch effort to slow the decline of capitalizing on another person’s labor. Today, a single individual empowered with an AI can accomplish the work of a hundred intelligent men. These companies want you to believe that you will lose your job. What they neglect to mention is that when you leave that failing business – one that profits by buying time from humans, exploiting fellow humans – the manpower you need to establish your own business with the capacity of a 100-person team will be at your disposal, ready to compete with the company that just let you go. As in all cases where fear is used as a control mechanism, the beneficiaries are rarely the ones being told to be fearful.
- What is WP-AGI and how does it change the AI landscape?
- What does it mean to say AI can be ‘completely automated’?
- What are ‘voting layers’ in the context of AI and how do they contribute to a clear logical path of steps?
- How is AI similar to rocket fuel?
- What do you mean by ‘adding layers of complexity’ to AI, and how does it benefit us?
- How do the latest networks of weights and biases play a role in automating intelligence?
- How safe is AI? Can it pose any dangers?
- How can AI, specifically WP-AGI thought chains, be used in a company’s daily operations?
- Why don’t you support the proposed pause on building larger LLMs? What are the implications of such a pause?
- Can AI cause job losses? If so, what are the potential mitigating factors?