> Timestamp queries allow WebGPU applications to measure precisely (down to the nanosecond) how much time their GPU commands take to execute compute and render passes
> ...
> Due to timing attack concerns, timestamp queries are quantized with a resolution of 100 microseconds, which provides a good compromise between precision and security.
I don't have a particular need of nanosecond granularity timestamps for WebGPU- there are other parts of the web stack where I could really use better time measurement- but I understand the security concern and it's far better to be safe than sorry.
But they quote two wildly different granularities in the same article, within a paragraph of each other...
The former is a spec detail (the result is returned in ns) and the latter is an implementation detail (browsers currently quantize the result to 100 µs). That is a useful distinction, since you can use WebGPU outside of the browser by embedding Dawn or wgpu into your own application, and there you should get the maximum resolution the spec allows for. Environments like Electron might also opt out of that timing attack mitigation, since they're intended to run trusted code.
I agree the article could have made that clearer though.
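To make the spec-vs-implementation distinction concrete, here is a rough sketch of timing a compute pass in JavaScript. It assumes a device created with the `'timestamp-query'` feature enabled; the `quantizeNs` helper is purely illustrative of what rounding to 100 µs buckets would do to a raw nanosecond value (the exact quantization mechanism is up to each browser):

```javascript
// Illustrative only: roughly what a 100 µs quantization does to a raw
// nanosecond timestamp. Real browsers' mechanisms are implementation-defined.
function quantizeNs(rawNs, bucketNs = 100_000) { // 100 µs = 100,000 ns
  return Math.floor(rawNs / bucketNs) * bucketNs;
}

// Sketch of the actual API; only meaningful where WebGPU is available
// and the device was requested with the 'timestamp-query' feature.
async function timeComputePass(device, pipeline) {
  const querySet = device.createQuerySet({ type: 'timestamp', count: 2 });
  const resolveBuf = device.createBuffer({
    size: 16, // 2 queries × 8 bytes (u64)
    usage: GPUBufferUsage.QUERY_RESOLVE | GPUBufferUsage.COPY_SRC,
  });
  const readBuf = device.createBuffer({
    size: 16,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass({
    timestampWrites: {
      querySet,
      beginningOfPassWriteIndex: 0,
      endOfPassWriteIndex: 1,
    },
  });
  pass.setPipeline(pipeline);
  pass.dispatchWorkgroups(1);
  pass.end();
  encoder.resolveQuerySet(querySet, 0, 2, resolveBuf, 0);
  encoder.copyBufferToBuffer(resolveBuf, 0, readBuf, 0, 16);
  device.queue.submit([encoder.finish()]);

  await readBuf.mapAsync(GPUMapMode.READ);
  const [begin, end] = new BigUint64Array(readBuf.getMappedRange());
  return Number(end - begin); // nanoseconds per the spec; possibly quantized
}
```

The units are always nanoseconds either way; quantization just limits how many distinct values you can observe.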
> To help you anticipate memory limitations when allocating large amounts during the development of your app, requestAdapterInfo() now exposes memoryHeaps information such as the size and type of memory heaps available on the adapter.
Oh nice, I was just complaining about that here the other day. The docs mention that browsers will probably guard that information behind a permission prompt to prevent it from being used for fingerprinting, but it's better than nothing.
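For anyone curious what that looks like, here's a hedged sketch: the `memoryHeaps` shape below (per-heap `size` and `properties`) follows Chrome's proposal and may change, and the whole thing may sit behind a permission prompt. The `totalHeapBytes` helper is just an illustration of summing the reported sizes:

```javascript
// Pure helper: sum the reported heap sizes (in bytes).
// Assumes each heap object has a numeric `size` field, per the proposal.
function totalHeapBytes(heaps) {
  return heaps.reduce((sum, h) => sum + h.size, 0);
}

// Sketch of reading the adapter info; no-op outside a WebGPU environment.
async function logMemoryHeaps() {
  if (typeof navigator === 'undefined' || !navigator.gpu) return;
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return;
  const info = await adapter.requestAdapterInfo();
  if (info.memoryHeaps) {
    for (const heap of info.memoryHeaps) {
      console.log(`heap: ${heap.size} bytes, properties: ${heap.properties}`);
    }
    console.log(`total: ${totalHeapBytes(info.memoryHeaps)} bytes`);
  }
}
```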
I'm pretty sure the numbers on web3dsurvey are skewed. AFAICT only sites about WebGL and WebGPU development are surveyed. To get real numbers you need their survey script to run on popular non-techie sites, right?
It does favor users who likely have better-than-average graphics devices, but it is still likely roughly right, especially as the numbers get high. It is based on ~250,000 data samples in the last week.
Can you explain how you would know the data is right if you don't have actual data from a site popular with non-techies?
Like, if I go to a tech meetup, it will generally be 95% male and mostly white and Asian. If I surveyed anything there it would bear very little relation to the real population. You list these sites
Pretty much all of them are not sites any non-graphics person would visit. So how can it possibly be even close to correct? It doesn't matter that there are 250k samples per week if those are nearly all programmers interested in 3D rather than average non-techies.
> Can you explain how you would know the data is right if you don't have actual data from a site popular with non-techies?
I didn't say it was absolutely right. Re-read my comment above.
> So how can it possibly be even close to correct?
I suggested in my previous comment that when the numbers are close to extremes they are more accurate. Basically as the survey approaches either 0% or 100% (look at iOS support for WebGPU above), it is indicative that the stddev of the distribution is small and thus any sampling bias is likely to not really have much of an effect. The more problematic numbers are those that are near the middle, like 50%, as the stddev is likely much higher and thus sampling bias can introduce more skew.
But generally: this survey started out at <1% for WebGPU support when I made the website last year, and it has since climbed above 50%. That is real movement, and indicative that WebGPU support as a whole is rising.
For the most part, you want WebGPU support to be >> 80% on this type of survey before you introduce a website that uses it exclusively, without some type of fallback available.
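The extremes-are-more-reliable intuition above can be sketched with the standard error of a sample proportion. To be fair, this only captures sampling noise, not selection bias (which is the thread's main complaint), but it does show why a number near 0% or 100% is inherently more stable than one near 50%:

```javascript
// Standard error of a sample proportion p measured over n samples:
// se = sqrt(p * (1 - p) / n). It peaks at p = 0.5 and shrinks toward
// zero as p approaches 0 or 1.
function proportionStdErr(p, n) {
  return Math.sqrt((p * (1 - p)) / n);
}

// With ~250,000 samples (the figure quoted above):
const n = 250_000;
console.log(proportionStdErr(0.5, n));  // widest at 50%
console.log(proportionStdErr(0.99, n)); // much tighter near the extremes
```

Note that selection bias doesn't shrink with `n` at all, which is why the 250k sample count alone doesn't settle the argument.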
I guess that's not the question I need answered as much as something more like "If I use feature X or require limit Y, what percent of the world market will I be excluding?"
Given the list of sites, I don't think the data will show me the long tail of low-powered older phones that are used by the majority of people, because the people visiting sites like those on the list are more likely than not to be tech people who have newer tech.
PS, I'm not dissing the site or your effort. It's awesome that you put it together. Rather, I'm saying the numbers are not reliable. You can't say "50% of people can use feature x". You can only say "50% of people that visit sites like those listed can use feature x". I suspect that 50% is way, way off; it may be more like "only 10% of the general population can use feature x".
Here's hoping you can get one or more mainstream sites to add your survey script.
Cool, there's consensus between APIs on how to expose "tensor cores" now? Very exciting! Although I think that relaxing memory limitations and providing more visibility and control there is even more important for running ML on the web right now. And harder to make progress on because there isn't a single team that clearly owns "all memory management".
Hi! I'm the Chrome dev who's been working on WebGPU's Android support. As jsheard said, the older Exynos devices will work because they're Mali-based. The newer RDNA3-based devices aren't enabled by default simply because our team hasn't been able to sufficiently test on them yet. Same goes for Tegra or PowerVR GPUs.
It's entirely a question of spending the time to ensure they're performing as expected (and probably implementing a few workarounds) and not a comment on the quality of the GPUs themselves.
That said, we know that these GPUs are in an increasing number of flagship devices, which makes them a higher priority for official support in future releases.
Secondly, this was a big issue plaguing WebGL adoption: unlike with native APIs, devices get blacklisted, and telling ordinary users to go flip browser flags is not an option for most products.
Hence why game studios are so keen on streaming instead.
It should work on slightly older or lower end Exynos chips, which have ARM Mali GPUs. Their switch to AMD RDNA was a fairly recent thing, and so far it has only been integrated into their flagship-tier parts.
Why is Linux support taking a while? I figured the underlying graphics subsystem on Android is Vulkan, right? Wouldn't that also be the main graphics subsystem on Linux these days?
Testing/Validation and bugfixing. Just having Vulkan isn't enough to enable it by default, everything actually has to work right. Even for Android this is only for specific types of devices. You should be able to force enable it on Linux right now though. It's just not GA quality guaranteed.
Similar. I've done experiments with subgroups suggesting approximately a 2.5x speedup for sorting (using the WLMS technique of Onesweep). Binding arrays will be very helpful for rendering images in the compute shader. A caveat is that descriptor indexing is not supported on somewhat older phones like the Pixel 4, but it is on the Pixel 6. I somewhat doubt buffer device address will be supported, as I think the security aspect is complicated (it resembles a raw pointer), but it's possible they'll figure out how to do it.
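Since availability of these extras varies so much by device, you have to feature-detect before relying on them. A hedged sketch, assuming `'subgroups'` as the feature name (that's what recent Chrome exposes; earlier builds used an experimental name, so check your target browser):

```javascript
// Request a device, opting into subgroup operations only where the
// adapter actually advertises them. Returns null outside a WebGPU
// environment (e.g. plain Node).
async function deviceWithSubgroups() {
  if (typeof navigator === 'undefined' || !navigator.gpu) return null;
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) return null;
  const requiredFeatures = [];
  if (adapter.features.has('subgroups')) {
    requiredFeatures.push('subgroups'); // feature name is an assumption; verify per browser
  }
  return adapter.requestDevice({ requiredFeatures });
}
```

Requesting only the features the adapter reports keeps `requestDevice` from rejecting on hardware (like the Pixel 4 class of devices mentioned above) that lacks them.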
Not really; that's not a problem with WebGPU. The worst you can do is crash the tab. With an unstable graphics driver there might even be the option to crash the system, but that's hardly a security issue, only an annoyance.
Historically, any time an attack surface as big as WebGPU has been exposed, "the worst you can do is crash the tab" has never been true.
Also note that with an unstable graphics driver, the way you usually crash the system is by touching memory you shouldn't (through the rendering API), which is definitely something that could be exploited by an attacker. It could also corrupt pages that later get flushed to disk, destroying data instead of just annoying you.
Though I am skeptical that it would happen here, security researchers have come up with some truly incredible browser exploit chains in the past, so I'm not writing it off.
WebGL has been around for more than a decade and didn't turn out to be a security issue, other than occasionally crashing tabs. Neither will WebGPU be.
What will be interesting about WebGPU getting wider Android deployment is whether it reduces the impact of variation in the drivers, which very much remains a headache. For example, WebGL-type API implementations have had a somewhat flexible idea of data sizes and layout, which, due to the nature of WebGPU, is much less acceptable there. One of the big wins of Vulkan has been that it has levelled the playing field somewhat, so poor drivers have less of an impact.
I think a lot of people will be disappointed by the proportion of devices currently in the wild that actually make this jump successfully, because the extent to which shortcuts have been taken is underappreciated. I look forward to the day I never have to think about the Mali GLSL compiler again.
What’s the actual utility of this for anyone that isn’t trying to replace native code with web pages? Is this ever going to be worth the no doubt massive investment it required?
It should enable much more performant (and battery friendly) 3D content on the web. WebGL has a level of synchronization in the main render loop of the browser that is just not the right way to do it, and WebGPU fixes that.
Additionally, it is better suited to GPU-based compute, which can be used to accelerate neural network inference, though not quite as well as the dedicated NN accelerators that are fairly common these days.
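To give a feel for what "GPU-based compute" means in practice, here is a hypothetical minimal sketch: a compute shader that doubles an array of floats. All names here are illustrative; the function returns `null` outside a WebGPU environment:

```javascript
// WGSL shader: each invocation doubles one element of a storage buffer.
const doubleShader = `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
  if (id.x < arrayLength(&data)) {
    data[id.x] = data[id.x] * 2.0;
  }
}`;

async function doubleOnGpu(values /* Float32Array */) {
  if (typeof navigator === 'undefined' || !navigator.gpu) return null;
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // Storage buffer the shader reads and writes.
  const storage = device.createBuffer({
    size: values.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(storage, 0, values);

  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: {
      module: device.createShaderModule({ code: doubleShader }),
      entryPoint: 'main',
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: storage } }],
  });

  // Staging buffer so the CPU can read the result back.
  const readback = device.createBuffer({
    size: values.byteLength,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(values.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(storage, 0, readback, 0, values.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange().slice(0));
}
```

The same pattern (storage buffers, a dispatch, a readback) is what ML-on-the-web runtimes build their matrix kernels on; doubling floats just keeps the sketch small.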
I would tend to agree that the business case for these things is not as strong as many would like though, and things have a distinct habit of ceasing to be interesting the moment they are widely achievable.
Do you mean the clusterfuck that is carefully matching your compiler, IDE, hardware, instruction set architecture, incompatible dependency versions, installers, package managers, etc.?
So far, WebGPU was the first and only time that I was able to run Stable Diffusion on my own hardware.
I feel like WebGPU actually holds some amount of promise as a cross-platform convenience. I'd agree that there's not a great reason to update your native code for this right now though.
If you're writing new gfx code though and are more familiar with web technology, there's definitely utility there. That's the bigger value prop: that people with web development skills can work on more pro (GPU-required) applications.
I very much do want that, since the WebGPU API is far easier and nicer to use than Vulkan or OpenGL. It also makes apps much easier to distribute over the web, and using web apps is much more secure than using native apps. Unfortunately, WebGPU is way too limited compared to desktop APIs.
I... hate these pedantic discussions. But here you go: a web page, by common understanding, is mainly something to look at. "Page" implies a document. A web app is a bit more.
(And many tech people hate that browsers can do more.)
So no, I do not want to replace native code with a web page. But in some cases with web apps.
Yeah, but why wouldn't you want HTML/CSS to render your UI?
I'm going to revisit Electron / NW.js for games again this year. Last time I tried, 4-5 years ago, I could not get smooth animation with requestAnimationFrame.
But it depends what you do; smooth animation of some elements is possible with HTML. In my case it got complex, though, and HTML was the bottleneck. Now I have the same assets in Pixi and it runs around 100× faster. No more lag, no stuttering. No more HTML.
(Almost: some static content is still HTML, but that is fine, as long as the DOM does not get modified.)