Did you even click through the link? The intern is blind and will be working towards building a "completely free and fully accessible distribution of GNU/Linux". I don't understand why one needs to be so sarcastic about a noble deed.
I can't even make the mental connection between the title/article and this comment. Not even the racist/hateful mental connection. Am I missing something?
Physical quality control issues. There was a big problem a while back with Nvidia integrated graphics failing pretty often that apparently had to do with bad potting material in the chip modules that didn't have the same thermal expansion properties as the solder that Nvidia had switched to. A large number of MacBooks had to be replaced, costing a lot of money and consumer confidence. Google "bumpgate" for more information.
And while Nvidia's Kal-El is really nifty, I'd rather have an OMAP 5 in a mobile device. Four A15s at 40nm will probably drain too much power to get any sort of battery life while playing that demo. 28nm should be much better.
He means on the iPhone and iPad. Nvidia acquired PortalPlayer, whose chips were used in the iPods, but Apple decided not to use them anymore and developed their own chip. Meanwhile, PortalPlayer's chips eventually became Tegra.
Also, due to a lawsuit that was recently settled with Intel, Nvidia was unable to provide Nehalem chipsets to Apple, meaning that Apple kept shipping Core 2 Duo machines with Nvidia's chipset. That won't last forever, though, and Apple will eventually need a different partner (ATI). Nvidia announced earlier this year that they are developing their own ARM-based CPUs for desktops/notebooks; who knows, maybe those will end up in Apple's notebook line.
Apple did not "develop their own chip" in any meaningful sense, especially not back in 2007 when Nvidia acquired PortalPlayer. The iPhone used an off-the-shelf Samsung part (the S5L8900), and PortalPlayer was in the business of audio SoCs for PMPs. The first generation Tegra was still more than two years away when the original iPhone was released.
Even if Apple had wanted to contract PortalPlayer for iPhone, they could not have.
Since then, they've mostly kept to off-the-shelf blocks; even the A4 only has minor restructuring of the blocks at the SoC level, and the GPU is a standard SGX block.
If Nvidia is extracting a premium over other ARM vendors it's unlikely any kind of mobile deal with Apple would happen. Not sure they'd be crazy about rebranding their mobile flagship with an Apple logo either.
> If Nvidia is extracting a premium over other ARM vendors it's unlikely any kind of mobile deal with Apple would happen.
Not really.
> Not sure they'd be crazy about rebranding their mobile flagship with an Apple logo either.
Why would they have to rebrand anything? The iPhone and 3G SoCs were not branded, and the 3GS only had minor branding (an Apple logo on the top 20%). The A4 was the first iPhone SoC with truly significant rebranding.
That's the problem for nVidia as a supplier for Apple: iOS hardware is Apple branded, and most people do not know what is inside. nVidia wants to build their own brand, but Apple has no interest in that.
Sandy Bridge is good compared to AMD chips, but it's overrated compared to ARM chips. ARM chips are still far ahead in power consumption. Besides, Intel won't even have a Sandy Bridge-class Atom by late 2013.
That's the bet Nvidia made when they lost the chipset business. I'd say they are more interested in their own CPU, especially now that Win8 is compatible with it.
The website can probably be used for evil... I was thinking you could send a crush notification to your spouse, name five of his/her coworkers, and see if he/she tells you about it...
Soooooo cool that they realized that every keyboard has a Windows key!!! Instead of adopting the MS shortcuts, they invent different shortcuts that do the same thing, how great is that?!
No, we don't. GDDR5 is faster than current cards can handle. The bottleneck is in the memory controllers themselves - beyond an effective data rate of 5GHz, the memory controllers get really complicated and just plain large.
A Radeon 6970 ships with GDDR5 that can run at a 6GHz effective rate, but it only runs it at 5.5GHz (though there are factory-overclocked cards that reach at least 5.7GHz). NVidia's cards tend to run their memory at no more than 4.5GHz effective, though they compensate by using a wider memory bus (which accounts for the odd-looking memory sizes).
Upper-mid-range cards built off smaller cores tend to have higher core clocks, but much lower memory speeds. Halving the number of compute units decreases the demand for memory bandwidth enough that it's not crippling when you use a memory controller that's half the size and can only run at 75% the speed, which saves the manufacturer a lot of money.
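To put numbers on the trade-off: peak memory bandwidth is just effective data rate × bus width / 8, so a wider bus makes up for a lower clock. A quick sketch (the 6970's 256-bit bus and the 384-bit NVidia bus are published specs; the 4.0 figure is just a round example of the slower-clock/wider-bus approach):

```javascript
// Peak memory bandwidth in GB/s: effective data rate (GT/s) times
// bus width (bits), divided by 8 bits per byte.
function bandwidthGBs(effectiveRateGTs, busWidthBits) {
  return effectiveRateGTs * busWidthBits / 8;
}

// Radeon 6970: 5.5 GT/s effective on a 256-bit bus.
console.log(bandwidthGBs(5.5, 256)); // 176 GB/s

// A slower clock on a wider 384-bit bus still wins on bandwidth --
// and a 384-bit bus is also why "odd" memory sizes like 1.5GB show up.
console.log(bandwidthGBs(4.0, 384)); // 192 GB/s
```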
Just wondering, is there a framework like Rails written in JS? Using JavaScript and V8 on the server side makes a lot of sense, since V8 is much faster than the Ruby interpreter.
Node isn't really a framework; it's a server-side runtime environment for JavaScript. You could totally build something cool with it: Express.js providing controllers, some sort of ORM to your favourite DB providing models, and a template engine for views. Or develop a Backbone.js app for your frontend and make it talk to the API exposed by your controllers. There's no easy full stack like Rails... yet... but it'll come, though I can't imagine it'll look too much like Rails.