Seems caught up in the connectionist "just 1m more neuron-equivalents" and "it's the wrong model" arguments. Which is trench warfare off to one side of the essential agreement: be it lack of scale or irrelevant models, it isn't emerging and it won't emerge any time soon.
Not a believer, so I tend toward the "wrong model" side of things. I do think the orders-of-magnitude improvements in the quality of outcomes are impressive, but wishing for unicorns doesn't make them real, Mattel plastic notwithstanding.
Altman measures "achievement of AGI" in purely economic terms, nothing to do with "consciousness" or "intelligence". Once global revenue tied to it hits x00M$ (or xB$ if they want to keep pushing and "adjust for inflation"), we have "AGI". So here it's entirely out of context.
I think on the latest numbers they just need a 25% return on the big US center to get there.