Imo we got to the current state by harnessing GPUs for a 10-20x boost over CPUs, plus cloud parallelization, which is maybe another 100x.
ASICs are probably another 10x.
But the training data may need to vastly expand, and that data isn't going to 10x. If anything, it's probably going to degrade.