AGI? It depends on your philosophy.

If you believe in determinism, you are likely to believe AGI (Artificial General Intelligence, or superintelligence) is close or already here. Determinism is the philosophy that free will is an illusion and that our choices are predetermined by the conditions and circumstances that precede them. In other words, if you could account for all the variables, you could build an if-then-else statement large enough that every decision and behavior could be predicted.
AI’s current primary path is inductive logic: applying massive compute to as much data as possible in order to isolate patterns, in effect building a mega if-then-else statement.
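To make the analogy concrete, here is a toy sketch of my own (not anything from the AI systems themselves): induction looks at many specific examples and compiles them into a rule, which behaves like a big memorized lookup or if-then-else. Anything outside the observed data simply falls through.

```python
# Toy illustration of inductive "learning": compile observed examples
# into a rule that maps each input to its most frequent label.
from collections import Counter

def induce_rule(examples):
    """Build a lookup from many specific (input, label) observations."""
    tallies = {}
    for x, label in examples:
        tallies.setdefault(x, Counter())[label] += 1
    # The "rule" is just the dominant pattern per observed input.
    return {x: c.most_common(1)[0][0] for x, c in tallies.items()}

examples = [("rain", "wet"), ("rain", "wet"), ("sun", "dry"), ("rain", "dry")]
rule = induce_rule(examples)
print(rule.get("rain"))                    # wet (dominant pattern wins)
print(rule.get("snow", "no prediction"))   # no prediction (never observed)
```

The point of the sketch is the direction of reasoning: the rule exists only because the specific cases came first, and it says nothing about cases it has never seen.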
If you do not believe in determinism and instead believe in free will, then you are less likely to believe that AGI is close. You believe that it is deductive, not inductive, logic that makes the human mind (sentience) unique.
Deductive logic starts with general premises and seeks to validate them against specific circumstances (real-world data). In other words, you build a conceptual model based on general observation, or just thinking, and then test its validity by applying it to specific situations. It works in the opposite direction from induction.
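The opposite direction can be sketched the same way (again, my own toy analogy): here the general premise comes first, formed by thinking rather than mined from data, and the specific cases are used only to check it. As an example premise I use a small slice of the Goldbach conjecture.

```python
# Toy illustration of deduction: state a general premise up front,
# then validate it against specific cases.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# General premise, stated before looking at any data:
# "every even number from 4 to 100 is the sum of two primes"
# (a small, checkable slice of the Goldbach conjecture).
def premise_holds(n):
    return any(is_prime(a) and is_prime(n - a) for a in range(2, n // 2 + 1))

# Now apply the premise to specific situations to validate it.
print(all(premise_holds(n) for n in range(4, 101, 2)))  # True
```

Note the reversal: the specific cases do not create the rule, they only confirm or refute a model that was conceived first.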
Today, AI struggles in the realm of deduction. There is a growing question of whether the current architectures (LLMs, …) can ever be deductive. That is not to say that AGI is impossible; I just wonder whether it is possible with our current technical approach to AI.
I admit this is a very oversimplified view, but I have been thinking about it for some time, and a growing theme in articles and posts about AI seems to align with it. So I decided to share it more broadly.
What do you think?


