WebGPU is now available on Android (chrome.com)
240 points by astlouis44 on Jan 18, 2024 | hide | past | favorite | 72 comments


> Timestamp queries allow WebGPU applications to measure precisely (down to the nanosecond) how much time their GPU commands take to execute compute and render passes

> ...

> Due to timing attack concerns, timestamp queries are quantized with a resolution of 100 microseconds, which provides a good compromise between precision and security.

I don't have a particular need for nanosecond-granularity timestamps in WebGPU (there are other parts of the web stack where I could really use better time measurement), but I understand the security concern, and it's far better to be safe than sorry.

But they quote two wildly different granularities in the same article, within a paragraph of each other...


The former is a spec detail (the result is returned in ns) and the latter is an implementation detail (browsers currently quantize the result to 100us). That is a useful distinction since you can use WebGPU outside of the browser by embedding Dawn or wgpu into your own application, and there you should get the maximum resolution the spec allows for. Environments like Electron might also opt-out of that timing attack mitigation since they're intended to run trusted code.
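Conceptually, the mitigation amounts to snapping the raw nanosecond counters to a 100 µs grid before they are returned. A minimal sketch of that idea (illustrative only; Chrome's actual rounding may differ):

```javascript
// Timestamps from a WebGPU timestamp query arrive as nanoseconds
// (BigInt values read back from the resolve buffer). Browsers then
// quantize them; conceptually that is snapping to a 100 µs grid.
const QUANTUM_NS = 100_000n; // 100 microseconds, in nanoseconds

function quantizeTimestamp(ns) {
  // Round down to the nearest multiple of the quantum.
  return (ns / QUANTUM_NS) * QUANTUM_NS;
}

// Two GPU events 42 µs apart can become indistinguishable:
const t0 = quantizeTimestamp(1_234_510_000n);
const t1 = quantizeTimestamp(1_234_510_000n + 42_000n);
console.log(t1 - t0); // 0n
```

Outside the browser (Dawn, wgpu) no such rounding is applied, so the same query yields the full resolution the hardware provides.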

I agree the article could have made that clearer though.


Yes indeed; they mention how to opt out of quantization directly in Chrome (at your own risk) using DevTools.


New exciting web fingerprinting vector dropped.

As if we didn’t have enough already.


Meanwhile sleep() in js land is more of a suggestion when it comes to accuracy. Browser standards are strange.

Of course I mean setTimeout/setInterval, because JS doesn't even expose a sleep function.
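For anyone unfamiliar, the usual idiom is to wrap setTimeout in a Promise; the delay is a floor rather than a guarantee, which is the "suggestion" being described. A minimal sketch:

```javascript
// A "sleep" for JS: resolves no earlier than ms milliseconds from now.
// The actual delay can be longer (busy event loop), and browsers clamp
// deeply nested setTimeout calls to a minimum of roughly 4 ms.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function demo() {
  const start = Date.now();
  await sleep(50);
  const elapsed = Date.now() - start;
  console.log(elapsed >= 40); // true: the delay is a floor, not exact
}

demo();
```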


> To help you anticipate memory limitations when allocating large amounts during the development of your app, requestAdapterInfo() now exposes memoryHeaps information such as the size and type of memory heaps available on the adapter.

Oh nice, I was just complaining about that here the other day. The docs mention that browsers will probably guard that information behind a permission prompt to prevent it from being used for fingerprinting, but it's better than nothing.
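As a sketch of how that might be consumed, assuming the shape the article describes (an array of heap entries, each carrying a byte size; treat the exact field names as illustrative):

```javascript
// Hypothetical helper: sum the sizes of the reported memory heaps.
// Assumes each entry has a numeric byte `size`, per the article.
function totalHeapBytes(memoryHeaps) {
  return memoryHeaps.reduce((sum, heap) => sum + heap.size, 0);
}

// Browser-side usage (guarded, since the field may be absent or gated
// behind a permission prompt, as the docs warn):
async function logAdapterHeaps() {
  const adapter = await navigator.gpu?.requestAdapter();
  const info = await adapter?.requestAdapterInfo();
  if (info?.memoryHeaps) {
    console.log(`${totalHeapBytes(info.memoryHeaps)} bytes across ` +
                `${info.memoryHeaps.length} heaps`);
  }
}
```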


Nice!

We just need Linux and iOS. And then we'll have somewhere around 80% support for WebGPU across all devices.

I'm getting my numbers from https://web3dsurvey.com/webgpu

Android: 0.34%

Chromium OS: 78.15%

iOS: 0.09%

Linux: 0.75%

Mac OS: 54.43%

Windows: 77.96%


I'm pretty sure the numbers on web3dsurvey are skewed. AFAICT only sites about WebGL and WebGPU development are surveyed. To get real numbers you'd need their survey script to run on popular non-techie sites, right?


It does favor users who likely have better-than-average graphics devices, but it is still likely roughly right, especially as the numbers approach the extremes. It is based on ~250,000 data samples in the last week.


Can you explain how you would know the data is right if you don't have actual data from a site popular with non-techies?

Like, if I go to a tech meetup it will generally be 95% male and mostly white and Asian. If I surveyed anything there it would have very little relation to the real population. You list these sites:

threekit.com webgpufundamentals.org james.darpinian.com realism-effects-obeqz.vercel.app gobattle.io. ict.moe.gov.om modelviewer.dev realism-effects-git-v2-obeqz.vercel.app dev.phaser.io redblobgames.com threejs.org axiomatic-inc.com weatherlayers.com alpha.gobattle.io. gobattle.io spookyball.com mrdoob.com streets.gl realism-effects.vercel.app phaser.io alpha.gobattle.io molstar.org demo.weatherlayers.com old.phaser.io webglfundamentals.org toji.dev jeeliz.com clicktorelease.com

Pretty much all of them are sites no non-graphics person would visit. So how can it possibly be even close to correct? It doesn't matter that there are 250k samples per week if those are nearly all programmers interested in 3D rather than the average non-techie.


> Can you explain how you would know the data is right if you don't have actual data from a site popular with non-techies?

I didn't say it was absolutely right. Re-read my comment above.

> So how can it possibly be even close to correct?

I suggested in my previous comment that when the numbers are close to extremes they are more accurate. Basically as the survey approaches either 0% or 100% (look at iOS support for WebGPU above), it is indicative that the stddev of the distribution is small and thus any sampling bias is likely to not really have much of an effect. The more problematic numbers are those that are near the middle, like 50%, as the stddev is likely much higher and thus sampling bias can introduce more skew.
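The intuition about the extremes can be made concrete with the standard error of a sampled proportion, sqrt(p(1-p)/n), which shrinks toward zero as p approaches 0 or 1. A quick sketch (note this captures sampling variance, not selection bias):

```javascript
// Standard error of a sampled proportion: sqrt(p * (1 - p) / n).
// Near the extremes the sampling noise collapses, which is the point
// being made about iOS's near-0% WebGPU support above.
function standardError(p, n) {
  return Math.sqrt((p * (1 - p)) / n);
}

const n = 250_000; // roughly the weekly sample count cited
console.log(standardError(0.5, n));    // worst case, ~0.001
console.log(standardError(0.0009, n)); // iOS-like, ~0.00006
```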

But generally: this survey started out at <1% WebGPU support when I made the website last year, and it has since climbed above 50%. That is real movement, and indicative on the whole that WebGPU support is climbing.

For the most part, you want WebGPU support to be >> 80% on this type of survey if you want to launch a website that uses it exclusively, without some type of fallback available.


I guess that's not the question I need answered so much as something more like: "If I use feature X or require limit Y, what percentage of the world market will I be excluding?"

Given the list of sites, I don't think the data will show me the long tail of low-powered older phones used by the majority of people, because the people visiting sites like those on the list are more than likely tech people with newer hardware.

PS, I'm not dissing the site or your effort. It's awesome that you put it together. Rather, I'm saying the numbers are not reliable. You can't say "50% of people can use feature X". You can only say "50% of people that visit sites like those listed can use feature X". I suspect that 50% is way, way off; it's more like "only 10% of the general population can use feature X".

Here's hoping you can get one or more mainstream sites to add your survey script.


According to [1] Android 12+ covers nearly 60% of all Android devices. That's more than I would have expected.

1: https://gs.statcounter.com/os-version-market-share/android/m... (Android 14 is still counted as "other")


If Safari Technology Preview is anything to go by, it may come to iOS sooner or later.

https://webkit.org/blog/14879/webgpu-now-available-for-testi...


Well done WebGPU team! Looking forward to the announcement one day that this has landed: https://github.com/gpuweb/gpuweb/issues/4195


Cool, there's consensus between APIs on how to expose "tensor cores" now? Very exciting! Although I think that relaxing memory limitations and providing more visibility and control there is even more important for running ML on the web right now. And harder to make progress on because there isn't a single team that clearly owns "all memory management".


>devices running Android 12 and greater powered by Qualcomm and ARM GPUs.

So... it won't work on any Exynos, since they have the AMD RDNA3 arch? Do I get that right?


Hi! I'm the Chrome dev that's been working on WebGPU's Android support. As jsheard said the older exynos devices will work because they're Mali-based. The newer RDNA3-based devices aren't enabled by default simply because our team hasn't been able to sufficiently test on them yet. Same goes for Tegra or PowerVR GPUs.

It's entirely a question of spending the time to ensure they're performing as expected (and probably implementing a few workarounds) and not a comment on the quality of the GPUs themselves.

That said, we know that these GPUs are in an increasing number of flagship devices, which makes them a higher priority for official support in future releases.


First of all, thanks for the effort.

Secondly, this was a big issue plaguing WebGL adoption: contrary to native APIs, devices get blacklisted, and telling common users to access browser flags is not an option for most products.

Hence why game studios are so keen on streaming instead.


It should work on slightly older or lower end Exynos chips, which have ARM Mali GPUs. Their switch to AMD RDNA was a fairly recent thing, and so far it has only been integrated into their flagship-tier parts.


I'm really looking forward to 2034, when WebGPU features will catch up to 2024.


By that time, it might even get supported by Chrome for Linux.


Why is Linux support taking a while? I figured the underlying graphics subsystem on Android is Vulkan, right? Wouldn't that also be the main graphics subsystem on Linux these days?


Testing, validation, and bug fixing. Just having Vulkan isn't enough to enable it by default; everything actually has to work right. Even on Android this is only for specific types of devices. You should be able to force-enable it on Linux right now, though; it's just not guaranteed to be GA quality.


I tried and tried and tried, but Chrome 120 always applies an undocumented Origin Trial that disables WebGPU.


You know we’ll have all moved onto Romulan by then, leaving Vulkan and Metal behind - including WebGPU.

In seriousness though, WebGPU in Chrome with “non-free” Linux GPU drivers should work, no?

Edit: I see it’s still behind a flag


about:flags in Chromium

search for "accel"

Disable the blacklist for your GPU.


Which is exactly why WebGL never really took off for games like Flash did, versus native games, or now streaming.

Having drivers installed is not enough as the browser lords decided the computer isn't worthy of playing games.


Flash was buggy crap that made lots of older computers burn cycles like crazy, and it had zero accessibility for the blind. It deserved to die.


Usually the only ones complaining are Linux users.

The rest enjoyed the games, to the point that Flash is being brought back thanks to WebAssembly, with Unity and Flutter as spiritual successors.


That doesn't stop disablement done through --origin-trial-disable-feature=WebGPU, and I have yet to figure out how to drop that without recompiling Chrome.


AFAIK it is possible only in unstable and beta, not in stable. That's why GP mentioned Chromium, not Chrome.

Which fully supports pjmlp's point in the sibling comment.


On ChromeOS.


On devices by selected vendors, and only on selected models.


What features?


Personally, the ones I'm most looking forward to:

* Subgroup operations

* Push constants (available in wgpu, but not WebGPU the spec/web impl)

* u64 + atomic image ops

* Mesh shaders

* Raytracing

* Binding arrays / descriptor indexing + device buffer addresses


Similar. I've done experiments with subgroups suggesting approximately a 2.5x speedup for sorting (using the WLMS technique of Onesweep). Binding arrays will be very helpful for rendering images in the compute shader. A caveat is that descriptor indexing is not supported on mid-old phones like Pixel 4, but it is on Pixel 6. I somewhat doubt device buffer address will be supported, as I think the security aspect is complicated (it resembles a raw pointer), but it's possible they'll figure out how to do it.


Looking back at WebGL's decade-long adoption, and considering that WebGPU is based on 2015-era features, that is pretty much spot on.


In 2034 it'll be as dead as Flash because of security issues.


Not really, that is not the problem of WebGPU. The worst you can do is crash the tab. With an unstable graphics driver, there might even be the option to crash the system but that's hardly a security issue, only an annoyance.


Historically any time an attack surface as big as WebGPU has been exposed, "the worst you can do is crash the tab" has not ever been true.

Also note that with an unstable graphics driver, the way you usually crash the system is by touching memory you shouldn't (through the rendering API), which is definitely something an attacker could exploit. It could also corrupt pages that later get flushed to disk, destroying data rather than just annoying you.

Though I am skeptical as to whether it would happen, security researchers have come up with some truly incredible browser exploit chains in the past, so I'm not writing it off.


WebGL has been around for more than a decade and didn't turn out to be a security issue, other than occasionally crashing tabs. Neither will WebGPU be.


By exposing vulnerable graphics drivers to arbitrary web code, WebGL has allowed websites to take screenshots of your desktop (https://www.mozilla.org/en-US/security/advisories/mfsa2013-8...) and break out of virtual machines (https://blog.talosintelligence.com/nvidia-graphics-driver-vu...), to use two examples I found via a web search.


Very curious what you see as the problems with WebGPU currently. I’ve been tinkering with it slowly as it has a bit of a learning curve.


What will be interesting about WebGPU getting wider Android deployment is whether it reduces the effect of variation in the drivers, which very much remains a headache. For example, WebGL-style API implementations have had a somewhat flexible idea about data sizes and layout, which, due to the nature of WebGPU, is much less acceptable there. One of the big wins of Vulkan has been that it has levelled the playing field somewhat, so poor drivers have less of an impact.

I think a lot of people will be disappointed by the proportion of devices currently in the wild that actually make this jump successfully, because it is underappreciated how many shortcuts have been taken. I look forward to the day I never have to think about the Mali GLSL compiler ever again.


It will be same as ever.

The big difference is that on native APIs we can work around them.

On browser APIs, the device gets blacklisted end of story for those folks.


My team has been building out Unreal Engine 5 support for WebGPU, for anyone interested.


Do you work at Epic or is this an external effort?


What kinds of challenges have you run into?


How do you run the task manager with Android Chrome?

Does Android Chrome have the same per-tab hover-card RAM-use feature as desktop Chrome?

From https://news.ycombinator.com/item?id=37840416 :

>> From "Manifest V3, webRequest, and ad blockers" (2022) https://news.ycombinator.com/item?id=32953286 :

>> What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?

>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds

>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization

>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU


What’s the actual utility of this for anyone that isn’t trying to replace native code with web pages? Is this ever going to be worth the no doubt massive investment it required?


It should enable much more performant (and battery-friendly) 3D content on the web. WebGL imposes a level of synchronization with the browser's main render loop that is just not the right way to do it, and WebGPU fixes that.

Additionally, it is better suited to GPU compute, which can be used to accelerate neural-network inference, though not quite as well as the dedicated NN accelerators that are fairly common these days.

I would tend to agree that the business case for these things is not as strong as many would like though, and things have a distinct habit of ceasing to be interesting the moment they are widely achievable.


So there’s no real use case. Got it.


There is a very clear use case in the first sentence of their comment though?

Unless your stance is that WebGL itself had no real use case, which is just silly.


Why does a web page need GL?


I don't know, ask any online maps site.

Or, say, literally any number of e-commerce sites that give the user an interactive and/or customisable view of products and not just static imagery.

Or, say, any number of pages that embed complex data visualisations.

"Why does a graphical UI platform need performant graphics" is an incredibly tautological question.


It's likely to become the best way to run cross-platform GPU code in the medium term.


That already existed via middleware engines.


such as?

Do you mean the clusterfuck that is carefully matching your compiler, IDE, hardware, instruction set architecture, incompatible dependency versions, installers, package managers, etc.?

So far, WebGPU was the first and only time that I was able to run Stable Diffusion on my own hardware.

https://websd.mlc.ai/


Unity, Unreal, Ogre3D, Open3D, Godot, Stride, Defold,....

Assuming you want to use 2024 hardware features in 2024.


These are game engines. What if I want to run GPGPU code in a cross platform way?


They didn’t say it would be the only way, just that it would be the best way


I feel like WebGPU actually holds some amount of promise as a cross-platform convenience. I'd agree that there's not a great reason to update your native code for this right now though.

If you're writing new gfx code though and are more familiar with web technology, there's definitely utility there. That's the bigger value prop: that people with web development skills can work on more pro (GPU-required) applications.


Perhaps it can be used to avoid the outrageous 30% app store fees.


"trying to replace native code with web pages? "

No one wants that. But many like to write their apps for only one platform, and then still have them run almost everywhere.

The web is the best we have to achieve this, and this will greatly improve the possibilities.

Edit: My app will soon use no HTML elements at all. It is not a "webpage".


> No one wants that.

I very much do want that, since the WebGPU API is far easier and nicer to use than Vulkan or OpenGL. Distributing apps over the web also makes them much more accessible, and using web apps is much more secure than using native apps. Unfortunately, WebGPU is way too limited compared to desktop APIs.


You say no one wants that, then you describe doing exactly that.

If your “app” runs in a browser window—a window presented by a browser engine—it’s fundamentally a web page. (They’re two distinct words.)


I... hate those pedantic discussions. But here you go: a web page, by common understanding, is mainly something to look at; "page" implies "document". A web app is a bit more. (And many tech people hate it that browsers can do more.)

So no, I do not want to replace native code with a web page. But in some cases with web apps.


Can you have an Electron app without HTML elements? A pure WebGPU + WebAssembly program?


Yeah, but why wouldn't you want HTML/CSS to render your UI?

I'm going to revisit Electron/NW.js for games again this year. Last time I tried, 4-5 years ago, I could not get smooth animation with requestAnimationFrame.


Performance.

I recommend pixijs.

But it depends on what you do; smooth animation of some elements is possible with HTML. But in my case it got complex, and HTML was the bottleneck. Now I have the same assets in Pixi and it runs around 100× faster. No more lag, no stuttering. No more HTML.

(Almost: some static content is still HTML, but that is fine, as long as the DOM does not get modified.)


I'm not sure I understand you.

Can you expand your comment somewhat?



