If you understand the concept of an "API", CUDA is an API for the chip.
If you understand the concept of "firmware", CUDA plays a similar role, like the software in routers. (Strictly, CUDA isn't firmware burned into the chip; it's the drivers, compiler, and libraries NVIDIA ships so programs can use the chip.)
Basically, CUDA is a software layer that sits on top of the basic chip. The layer is not the application. That's why CUDA belongs to the chip, not to the AI built on top of it.
Anybody can write CUDA code; it is not that big a deal.
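To make "writing CUDA" concrete, here is a minimal sketch of what a CUDA program looks like: ordinary C++ plus a few GPU extensions and API calls. This is the classic vector-add example, with illustrative names; it needs NVIDIA's `nvcc` compiler and an NVIDIA GPU to actually run.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A "kernel": a function that runs on the GPU, one thread per element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);  // launch the kernel via the CUDA API
    cudaDeviceSynchronize();                  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Note the point this illustrates: the application logic lives in your code; CUDA is just the API and runtime that gets it onto the chip.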
The "software ecosystem" is like the router control panel on your PC, from which you can turn the router on and off or reboot it when there are problems. It is nice to have, but not a big deal.
Whoever thought of the $Cisco(CSCO)$ analogy hit it on the head. Just like Cisco routers run Cisco's own software, AI chips run CUDA. Just like routers had the Cisco-Juniper wars, AI chips are having the NVDA $Intel(INTC)$ $Advanced Micro Devices(AMD)$ wars.
Just like internet bandwidth increased by leaps and bounds, startups are reporting stunning improvements in AI chip performance. This AI thing is just getting started, folks, so put on your seatbelts. If the Internet is any guide, chip performance will grow thousands-fold.
Oh, yeah, and just like Cisco, we have the exact same kind of NVDA bubble.