One of the biggest problems with measuring AI progress is the ambiguity of measuring intelligence itself.
AGI is treated as a milestone we have yet to cross, but there is no central definition of AGI.
Depending on who you ask, AGI is achieved when a system:
- can fool humans into thinking it is one of them, in other words, passes a Turing Test
- demonstrates creativity (Springer)
- can develop new skills (DeepMind)
- solves unfamiliar tasks (DeepMind)
- is generally capable across domains (IBM)
- is superior to humans in intelligence (Scientific American)
- outperforms humans economically (OpenAI Charter)
- can independently solve complex problems without human oversight (DeepMind)
Even without a consensus definition, I can confidently say we have AGI, because every criterion above has been met.