Some people are mentioning that this doesn't work on iOS, and that's because Apple doesn't have WebGL available in mobile versions of Safari.
But it is. This post by Nathan de Vries [0] shows how you can hack the UIWebView to enable WebGL contexts in your iOS apps. You have to get a little more creative these days (the compiler has gotten stricter about private types and functions), but all it takes is something like this:
    - (void)setWebGLEnabled:(BOOL)enableWebGL forWebView:(UIWebView *)uiWebView
    {
        // _browserView and _setWebGLEnabled: are private API, hence performSelector:
        id webDocumentView = [uiWebView performSelector:@selector(_browserView)];
        id backingWebView = [webDocumentView performSelector:@selector(webView)];
        [backingWebView performSelector:@selector(_setWebGLEnabled:)
                             withObject:[NSNumber numberWithBool:enableWebGL]];
    }
It should go without saying that code like this will not make it into the app store.
1. So they could be prepared to rapidly enable it if market exigencies required them to (say, if WebGL experiences for whatever reason started blowing up on competitor platforms).
2. They have some other, grander plan behind WebGL that we don't know about yet. Maybe someday they want Xcode to be the premier web app dev tool.
Remember, this is from the company that developed Mac OS X in parallel for PowerPC and x86 "just in case"... clearly they plan for the future.
This specific demo still won't work on iOS devices predating the A7's GPU, due to OpenGL extension(s) used. (Doesn't work on my iPhone 5 even with WebGL enabled.)
It would be great if Apple would activate WebGL on iOS!
Officially only available through iAd on iOS 4.2 and higher, for all devices except the 2nd Gen iPod Touch or iPhone 3G and earlier. However, there is a tweak for jailbroken devices to enable functionality for Mobile Safari and all other WebKit browsers.
If you'd asked me three years ago, I never would have guessed it would turn out like this. We have Internet Explorer supporting WebGL on ARM-based Windows tablets manufactured by Microsoft itself, yet the iPad still lacks WebGL. It's like bizarro world.
One gets the impression that Apple feels WebGL is perhaps too much power in a cross-platform package. If you need to do 3D-accelerated GUIs or games on iOS today, you build an Objective-C app for the App Store because there's no other way to access those APIs. Is Apple protecting their lock-in?
If WebGL worked in UIKit's WebView, you could just wrap your WebGL game with PhoneGap and use StoreKit for ripping off children... I mean, selling exciting in-app upgrades.
I'm not convinced the App Store is a meaningful marketing channel anyway, except for the 0.001% of game developers that can afford to buy downloads and then happen to get lucky in the App Store curation roulette.
Without knowing the details, I assume that WebGL poses a high security risk.
Considering that Mobile Safari has access to lots of confidential data (e.g. the iCloud keychain) and runs in a less restricted sandbox (it can make data pages executable), that's probably the last place where Apple wants to try out some new technology.
EDIT: It's weird how iOS is now arguably the most secure mobile platform, after all the shit they got from Blackberry in the beginning :)
Security problems were pointed out to the WebGL guys a very long time ago and continue to be pointed out. They have not all been addressed (for whatever reasons; I don't want to imply that any of them are impossible to resolve), although extensions exist or have been proposed to resolve the more serious concerns.
I'd be more inclined to think that Apple doesn't want to include WebGL for other reasons, though. Their track record with this kind of technology is much less than encouraging...
This demo throws an error for me on iOS - it's missing the OES_texture_float extension. Looks awesome on my laptop. If someone has a jailbroken current-gen iPhone or iPad and wants to try it out, Apple's docs say that extension is supported on the A7 GPU driver [1]
There's not a tweak currently in Cydia for WebGL, because the last guy to do that was naughty [2] and nobody's published a replacement.
Both rpetrich and I have open-source WebGL enabler tweaks if anyone's interested: [3] and [4]
(I'm not sure if his works on iOS7, as the last commit was three years ago.)
I dunno, I'm not really crazy about the prospect of random web sites murdering my phone's battery with superfluous WebGL animations. Remind me again why web pages need the ability to display realtime 3D graphics?
I don't like a lot of what Apple does, but I'm glad somebody isn't thrilled with the idea of "the browser" becoming the world's most Lovecraftian cross-platform runtime, even if it's only out of self-interest.
If a web site wants to murder your battery with graphics, it can already do it with an overload of CSS3 animations and reckless JavaScript drawing in Canvas elements...
Cute comparison, but it's not really fair. The web was quite obviously more useful than Gopher, while WebGL offers no improvements to the web for its most important tasks of being the standard tool for communication and the interface to most of humanity's services. Somehow the web managed to be "good enough" for those purposes for 20+ years without WebGL.
I think a better comparison is to the Java applets of the late 90s and early 2000s: Cool demos, cool games, totally pointless and enormous security vulnerability. But hey, the Runescape devs made a lot of money off of their Java-applet MMO back in the day, and someone will probably do the same soon with WebGL, and that's what the web's all about.
Webgl is very important to the development of data visualization, educational and scientific simulations and explorable explanations. There's a lot of room for interesting applications in this space. See redblob games for an example of what I mean specific to teaching game algorithms.
Oh ok, let's keep a gaping security vulnerability open in the one runtime that everyone demands the ability to shove heaps of untrusted code into, just so that people can see the occasional really cool demo that gets posted on HN every few months run at 10 FPS on their phone.
There are holes in every graphics driver and OpenGL implementation out there (it's practically unavoidable, as it's a fairly low-level API). To expose that layer of an operating system to any random webpage a user clicks on is utter lunacy.
"So don't visit those sites." That's like saying your operating system's security policy is "just don't go to bad sites and you won't get viruses!"
@pavlov I'm not happy about CSS3 animations or the abundance of JavaScript you see today either, but that doesn't excuse piling even more ridiculous and insecure crap onto "the browser."
That's totally a legitimate criticism, and one that I hadn't heard before. Complaining about your phone's battery life when you can just as easily not visit the site has a lot less weight to it.
I was adding it to my original comment when the drive-by downvoters arrived.
>Complaining about your phone's battery life when you can just as easily not visit the site has a lot less weight to it.
Do you really think that advertisers won't latch on to WebGL if it becomes widely adopted? I'm not "complaining" about the possibility of seeing some WebGL on $game_site, I'm worried about the inevitability of the banner ads that I already can't block on my phone suddenly becoming filled with eyecatching, battery-wasting 3D graphics, just as banner ads these days already use CSS3 animations to the same effect. I don't see why that isn't a valid concern.
Also, it's a matter of principle for me that apparently web developers don't agree with. I like the web as hypertext + the dumb terminal of our time. I don't see how things like WebGL benefit anybody but game developers and advertisers. Sometimes I feel like I'm the only one that sees the value in keeping the dumb terminal and the cross platform application API separate. We need both. There should be a simple way to run a game on all platforms (err, ignore for the moment that it already exists and is called SDL), and there should be some kind of easy-to-use "dumb" interface to society, but we're doing the world a disservice to combine the two.
Maybe I'm crazy, but I think certain vested interests are determined to turn the former into the latter, because they want to have their advertising pie and eat Apple's/Microsoft's/etc's app store pies as well. Maybe they were from the beginning, what with the talk of Netscape being at war with Microsoft.
The web is practically necessary to lead a normal life at this point. That people would focus not on taking the core of what makes the web actually matter for communication and as interfaces for the important services in our lives, making it simpler and more secure and more portable and easier to develop for, but instead on making it harder and harder to get down to that core, seems almost as wrong to me as banks in South Korea that require IE6.
That's the kind of principle I'm talking about. I understand if you don't agree with me; a lot of people have a lot invested in the web these days. However, I also don't see why I should have to bite my tongue about it.
@pyalot2 it's not FUD, because more code always means a larger attack surface, and OpenGL implementations happen to be a particularly large and historically poorly tested source of code. Bounds checking buffer accesses is an improvement but it doesn't magically make the implementations free of bugs. Check out this article from a while back on the state of OpenGL implementations for common chipsets used in Android devices, and tell me with a straight face you trust random webpages to talk to them:
That's absolutely not true; the security issues raised here are serious issues that resulted in a bunch of changes to the WebGL spec in order to alleviate them. Running untrusted code on the GPU is dangerous.
Not true how? I agree it's somewhat risky -- running untrusted code on the CPU is dangerous too and there have been bugs in sandboxing of JS/java/flash. Specs and implementations get tightened up, life goes on until the next bug.
This is where I'm at. What's the point of WebGL from Apple's perspective? If a developer wants to create a game, I'll just use OpenGL or CoreGraphics to do it.
The click-through rate from a website to a downloadable/app-store package, until it's installed, found, opened and used, is around 1%. The click-through rate from one website to another is around 50%.
Writing a native application requires you to port that application to every conceivable app store and platform in order to get it to run most places. This includes, but is not limited to: writing it once in C++, once in Objective-C, once in Java; packaging it for x86-64 Windows classic, x86-64 Windows Metro, x86 Windows classic, x86 Windows Metro, WinRT, Android, iOS, RPM, DEB, etc.
A native app also requires you to come up with a solution to windowing (on windowing platforms: WGL, GLX, etc.), sound (ALSA, OpenAL, DirectSound, etc.), input (XInput (X), XInput or DirectInput (Windows), Cocoa (OS X), etc.), and so on.
It also requires you to come up with a solution for building GUIs, which a lot of games do these days by simply embedding WebKit...
It also requires you to come up with a solution for scripting, which is already built into browsers; nevertheless, many embed AngelScript, Lua, etc.
Etc. etc. etc.
Why do WebGL? Maybe think about that again, really carefully.
What's the point of doing a browser at all then? Or email? Or a new device? Or having a company in the first place? Or inventing anything?
How can you tell a tech company is dead and done for? It's when they stop pushing the boundaries, when they stop serving their users, when they stop inventing, when they stop competing. Because they think they can afford it: famous last words of countless once-mighty giants.
That's more than a bit of a stretch. More like, Apple cares about the security of their sandbox and having exclusive control over the sale of software on their platform, and doesn't see the need to include a feature that a) practically guarantees security vulnerabilities, b) weakens their control over content for iOS, and c) users don't really care about at the moment.
Jailbreakers have exploited vulnerabilities in FreeType of all things, but who cares, let's just give them more ammunition! I'm sure it will work out fine.
Apple: Dead and done for because they made a sane decision. You heard it here first, folks.
Author here. I've done much more awesome stuff with WebGL since I made this demo but it's for my startup and is not public yet. We're actually looking for a backend engineer with graphics experience at the moment. Send me an email at evan@figma.com if you're interested.
I really liked your demo and I'd like to see what your WebGL startup will do. Is http://figma.com/ the right place to sign up to be notified when it launches?
Edit: You might want to add some more visual feedback (e.g., blanking the email input field or hiding it) when you enter your email successfully on the front page of figma.com. I had to launch Firebug to make sure it accepted my email with a 200 OK because that check mark icon appearing briefly while the email field's contents stayed the same seemed suspiciously like a silent request failure.
Simple OpenGL question: Can the refractions/caustics be computed entirely in the shaders? Or do you have to do some of the processing in the CPU* and pass the results back to the shaders? All the shaders I've seen are "local," in the sense that they only have access to the interpolated vertex data for their polygon, plus whatever uniforms have been set. It seems to me that things like caustics would require non-local information. Is that non-local information passed via a uniform? If so, I wonder what the data format of that uniform is.
* I know that shaders aren't guaranteed to execute on the GPU, depending on the OpenGL implementation. But for simplicity let's just assume they do.
Author here. The caustics in this demo are computed entirely on the GPU and are approximated with a mesh. The shader for the caustics needs the ratio of the projected area to the original area for each triangle to compute the brightness. You're right that a vertex shader can't access whole triangles (you usually use a geometry shader for that, which WebGL doesn't have) but you can still get the ratio of the triangle areas in this case by using the screen-space derivative functions dFdx() and dFdy() in the fragment shader. Reflections and refractions can also be done entirely in the fragment shader for simple scenes like this one with standard raytracing intersection tests.
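To make the area-ratio idea concrete, here's a CPU-side sketch in JavaScript of the same math. The demo itself derives the ratio in the fragment shader via dFdx()/dFdy(); the function names and the clamp constant below are my own illustrative choices, not the demo's code:

```javascript
// Caustic brightness from the ratio of a triangle's original area to its
// refracted (projected) area: light converges where the projection shrinks.
function triangleArea(a, b, c) {
  // The 2D cross product of two edge vectors gives twice the signed area.
  const ux = b[0] - a[0], uy = b[1] - a[1];
  const vx = c[0] - a[0], vy = c[1] - a[1];
  return Math.abs(ux * vy - uy * vx) / 2;
}

function causticBrightness(originalTri, projectedTri) {
  const original = triangleArea(...originalTri);
  const projected = triangleArea(...projectedTri);
  // Triangles whose projection shrank get brighter; clamp to avoid blowups
  // when a projected triangle degenerates to (near) zero area.
  return Math.min(original / Math.max(projected, 1e-6), 10);
}

// A triangle whose projection shrank to half its area is twice as bright.
console.log(causticBrightness(
  [[0, 0], [1, 0], [0, 1]],
  [[0, 0], [0.5, 0], [0, 1]]
)); // 2
```

In the shader, dFdx()/dFdy() of the projected position play the role of the edge vectors here, so no per-triangle data needs to be passed in.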
> you can get "non-locality" via texture lookups inside the shader code.
So you define a way to encode arbitrary non-local data as texture data, correct? That is, you're hacking textures as a store for arbitrary data instead of the image data that textures were originally designed to store?
In this case, with only 5 planes and one sphere, you can pass the whole scene into the shader as uniforms.
If you have arbitrary scenes, you can hack around this by estimating the water surface area where each triangle would be visible and then generate a water surface quadrilateral for each submerged triangle to raycast it.
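As a toy illustration of the texture-as-data-store idea from this subthread: scene data can be packed one texel per object into a float texture and fetched in the shader. The layout and helper name below are invented for illustration; uploading float texels is exactly what the OES_texture_float extension mentioned elsewhere in this thread enables:

```javascript
// Pack sphere data (center + radius) into the RGBA float layout a data
// texture would use, one texel per sphere.
function packSpheres(spheres) {
  const texels = new Float32Array(spheres.length * 4);
  spheres.forEach((s, i) => {
    texels[i * 4 + 0] = s.x; // R channel: center x
    texels[i * 4 + 1] = s.y; // G channel: center y
    texels[i * 4 + 2] = s.z; // B channel: center z
    texels[i * 4 + 3] = s.r; // A channel: radius
  });
  // This buffer would then be uploaded with gl.texImage2D(..., gl.FLOAT, texels)
  // and read back in the shader with a texture lookup per sphere.
  return texels;
}

console.log(Array.from(packSpheres([{ x: 1, y: 2, z: 3, r: 0.5 }])));
```

So yes, it's repurposing textures as a general data store, which is the standard workaround in WebGL's shader model.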
http://www.youtube.com/watch?v=oWrUvJYzRlQ
What is the future of WebGL? I have seen cool demos, but is this the way forward? Say one has a game to develop or a complicated visualization: is it worth investing time in learning WebGL, or trying to use the canvas directly?
If you just want to make a game, you may not want to bother learning WebGL directly. You can use an engine like PlayCanvas [1] to build a game, and you don't have to worry about the low-level stuff.
WebGL is essentially a specialized OpenGL-ES based context of canvas. HTML5 canvas in its usual context is only for 2D. You can render 3D on it but you'll essentially be creating your own software rendering engine from scratch.
So then my question is what is the future of WebGL?
I could see canvas extended to have 3D operations. Like say draw point takes 3 coordinates now. I can see a third party library that does 3D but projects it down to 2D and uses regular 2D commands on the canvas, or I see supporting WebGL interface and writing OpenGL style code.
> I could see canvas extended to have 3D operations. Like say draw point takes 3 coordinates now.
I think that's the role WebGL essentially fulfills. It was just done in a way that uses OpenGL instead of reinventing the wheel, since a 3D rendering engine is a non-trivial thing to build. In other words, you can do either:
    var ctx = myCanvas.getContext('2d');    // ctx has the 2D canvas API
    var ctx = myCanvas.getContext('webgl'); // ctx has the WebGL 3D API
> I can see a third party library that does 3D but projects it down to 2D and uses regular 2D commands on the canvas.
I think it would be a lot slower. Either way there's a 3D rendering engine. Embedding the engine natively and having GPU speedups is a dominating factor.
> or I see supporting WebGL interface and writing OpenGL style code.
I think the suggestions of using a lib to get higher level stuff are good ones. For simple enough stuff, d3, or more complex whole game engines are available.
It's a good question. I have an entire bookmark folder of "cool WebGL demos" collected over the years, but have seen very few full-fledged applications.
One way you could hedge the WebGL vs. Canvas bet is to use something like THREE.js, where you use THREE's cameras, lighting, etc and then specify whether it should use WebGL or Canvas for rendering. But then again, you'd be betting on THREE.js :)
WebGL is not just for 3D: it can be used for 2D games, where it is way faster than ordinary canvas2d, and allows use of shader effects and such. Major web game engines like Construct 2 [disclaimer: I am developer] use WebGL rendering with effects, and fall back to canvas2d if WebGL is not supported. So at least with Construct 2, it's been in commercial production use for a couple of years already.
It's a pretty bright future for WebGL. Lots of companies are doing 3D visualization on the web, including the one I work for. Our product includes an embedded web based architectural model 3D viewer (written by myself and one other dev).
So there are definitely companies making serious use of it.
Interesting product. I've been looking into BIM integration for the FM software suite I work on, and the possibility of embedding a WebGL BIM viewer to match our existing SVG-based CAD viewer. It seemed like a lot of work, though, to get it running properly (it seemed like one of those things where you can get a demo going in a matter of hours, but finishing it up takes months).
You are correct. One of our devs built a prototype using a 3rd party WebGL framework quickly (few days), but for the real deal, we started over and went straight to WebGL ourselves (so we could better optimize for large models, etc). The prototype that was built just couldn't handle models of any meaningful size, so we definitely needed to own everything about the code so we could optimize for our needs.
We are in the BIM space and have one of the few web based viewers for such models. Some other products require a plugin to render 3D, but we really wanted to reduce friction to use our product, so we decided to go with WebGL. It's now supported in all 3 major browsers too.
That depends. At ProPublica we've done a bunch of visualizations with canvas and SVG in the past, and they are powerful tools. However, as part of our investigation into Hurricane Sandy flooding we knew we wanted to make a 3D visualization of NYC to show the storm surge:
If your data is inherently 3 dimensional I'd say it is worth the effort to dive in, but the majority of visualizations that we do are charts and plots which don't really need to be 3D.
There's a WebGL2 spec in the works, but not sure if it's based on OpenGL ES 3.0 or 4.0/Next. Since it's taking so long to finish it, I'm hoping it's the latter, since 3.0 didn't bring that many improvements, and ES 4.0/Next/whatever they want to call it is supposed to launch soon, I think.
OK, so if it is using the canvas directly, why bother calling it WebGL, and why does it not run on some Linux machines, depending on the availability of certain graphics drivers?
If it were based on the canvas and using that directly, I would expect it to be an add-on library that takes a 3D scene, for example, and translates it into 2D canvas commands.
Because it's using the graphics card to convert the 3D instructions into 2D equivalents in real-time. You can't calculate that stuff ahead of time if you want to allow the user to manipulate the canvas. And the whole reason we have graphics cards in the first place is because doing that on the CPU is too slow.
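For illustration, the core of what "converting 3D into 2D" boils down to per vertex is a perspective divide. This is a toy sketch, and the focal length is an arbitrary assumption; a real pipeline does this with 4x4 matrices and handles clipping:

```javascript
// Toy perspective projection: map a 3D point onto a 2D image plane by
// dividing by depth, so farther points move toward the center of the screen.
function project(x, y, z, focal = 2) {
  const scale = focal / (focal + z); // shrink with distance from the camera
  return [x * scale, y * scale];
}

console.log(project(1, 1, 2)); // [ 0.5, 0.5 ]
```

A software library projecting 3D down to 2D canvas commands would run something like this for every vertex every frame, which is exactly the per-vertex work the GPU parallelizes.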
Well I've seen some demos and games of people using a 2D canvas to show a 3D scene.
Back in the day computers didn't have 3D accelerated cards, it was all done on the CPU and there were enough 3D games that ran on them.
I can also see canvas API extended to have 3D operations on it.
It just seems WebGL is (was?) popular, lots of demos, but I still haven't seen too many industry uses of it and I have heard people at work claim "it is dead". So I was just wondering, ok, if it is dead is there any hope of having 3D accelerated graphics in the browser or is there something else replacing it.
* It seems a sibling comment answered my question:
Author here. The water effect is a simple 2D heightfield simulation, which is explained pretty well at http://www.matthiasmueller.info/talks/GDC2008.pdf. It essentially just moves each vertex toward the average height of its neighbors, which turns out to propagate waves that look like water ripples.
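A minimal 1D sketch of that neighbor-averaging update, assuming the scheme described above. The demo runs the 2D version on the GPU, and the damping constant here is illustrative, not the demo's value:

```javascript
// One timestep of a 1D heightfield wave simulation: accelerate each water
// column toward the average height of its neighbors, then damp and integrate.
function stepHeightfield(height, velocity, damping = 0.99) {
  const n = height.length;
  for (let i = 0; i < n; i++) {
    const left = height[i > 0 ? i - 1 : i];       // clamp at the pool walls
    const right = height[i < n - 1 ? i + 1 : i];
    const average = (left + right) / 2;
    velocity[i] += average - height[i]; // pull toward the neighbor average
    velocity[i] *= damping;             // so ripples die out over time
  }
  for (let i = 0; i < n; i++) height[i] += velocity[i];
}

// Poking the surface produces a disturbance that spreads outward each step.
const height = [0, 0, 1, 0, 0];
const velocity = [0, 0, 0, 0, 0];
stepHeightfield(height, velocity);
console.log(height); // the center peak relaxes, neighbors rise
```

Because each column only needs its immediate neighbors, this maps directly onto a fragment shader reading adjacent texels of a height texture.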
The caustics are also pretty simple. Basically you take your heightfield mesh that represents the water surface and project each vertex independently along the ray refracted from the light through that vertex (using the vertex normal) and onto the pool floor. So you now have a mesh that is completely on the pool floor and contains lots of tiny triangles. To render caustics, just make triangles that got smaller brighter and ones that got bigger dimmer. I think I used the ratio of the projected area to the original resting area.
The reflection and refraction raytracing is all hard-coded for the geometry in the scene, which makes it really easy. It's just a simple sphere and box intersection test. The "ambient occlusion" is done by making parts of the objects darker when they get near each other.
State of the art for real-time water uses a full 3D volumetric representation for the water like in this paper: http://www.matthiasmueller.info/publications/tallcells.pdf. This lets you get waves that can fold over themselves like real waves. I haven't seen any other realtime methods that have caustics as good as the ones in my demo though.
> It essentially just moves each vertex toward the average height of its neighbors, which turns out to propagate waves that look like water ripples.
If you look at the equations that govern the liquid surface, the assumptions that this model is based on are not that bad. Reality is a bit more complex, but the average-of-neighbors seems like a good start. Pretty simple too, which is always good computationally.
> The reflection and refraction raytracing is all hard-coded for the geometry in the scene, which makes it really easy.
Which reminds me...
Back in the days of 386 CPUs, I did a real-time 3D ray-tracer that drew a few spheres rotating around each other, with a fixed light source and correct illumination depending on the incidence angle at any point on the spheres. All on a 30 MHz 386 CPU, nothing pre-rendered, no assembly code, no GPU, running at a high frame rate. Bricks were shat when people saw it.
The reality was that the math was highly optimized for that specific scene. I massaged the equations until I had worked all the expensive functions out of them. No sin() or cos(). I think the worst thing I had was a sqrt(), and even that was used sparingly.
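The sqrt-avoidance trick described here is the classic one: many containment and intersection tests can compare squared quantities instead of taking a root. A minimal sketch (not the commenter's actual code):

```javascript
// Point-in-sphere test with no sqrt(): compare the squared distance from
// the point to the center against the squared radius.
function insideSphere(px, py, pz, cx, cy, cz, r) {
  const dx = px - cx, dy = py - cy, dz = pz - cz;
  return dx * dx + dy * dy + dz * dz <= r * r;
}

console.log(insideSphere(1, 0, 0, 0, 0, 0, 2)); // true
console.log(insideSphere(3, 0, 0, 0, 0, 0, 2)); // false
```

On hardware where sqrt costs dozens of cycles, hoisting it out of the inner loop like this was often the difference between real time and a slideshow.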
> The "ambient occlusion" is done by making parts of the objects darker when they get near each other.
You're basically modelling Le Sage's theory of gravitation, with surface illumination representing "pressure" from the "corpuscles".
I implemented a basic water simulation a very long time ago. The water was approximated using a matrix of vertices where each vertex behaved like a spring and affected its neighboring vertices. So, for example, clicking a spot in the water would pull the closest vertex down, and it would spring back, creating a domino effect with its neighbors, their neighbors' neighbors, etc. It looks like this demo is using a similar technique, which is why you occasionally see those steep "spikes".
This is incredible. Only nitpick: if you dunk the ball slowly, it gets a weird water spike on top of it, and the water surface looks like some gel, not like water. If you pull it out, the ball should be wet. That'd be awesome :D
In addition to the water spike, I think the lack of a "splash" when the water hits the surface makes it feel off. I wonder how feasible it is to get this part of the fluid mechanics right, without the whole thing just crashing and burning or all janked up...
Some of the interfacing with the sphere is a little off, like if you drag it in and out of the water rapidly it does nothing. And if you slowly drag it out, the water clings to it a little too long.
WebGL is not supported by any mobile browser yet (http://caniuse.com/webgl)
(edit: it seems that Blackberry Browser and Opera Mobile support WebGL. My bad, but it is not possible to use these browsers on iOS)
If you can read caniuse's table, you'll see that WebGL's actually supported on mobiles. You can get a more quantitative view from http://webglstats.com/ if you'd like.
Blackberry has been pretty active in the WebGL community. I was actually surprised and impressed with the performance the first time I saw a demo on the Playbook!
WebGL works in all browsers on desktop and on most smartphones/tablets - except on iOS platform (Safari, Chrome) as Apple has disabled the support [1].
I tried to dig into the source to see, but can anyone tell what algorithm they are using to propagate the ripples? Normally this would be a Finite Difference Time Domain http://en.wikipedia.org/wiki/Finite-difference_time-domain_m... but I'm wondering if they didn't just use growing circles and a sine wave.
Seen a lot of these, but this is actually interesting for a change: not just a copy of something we did 15 years ago on ancient hardware.
The simulation using renderbuffers and float textures is cool. Although the method used is ancient, unexciting technology (see the water effect in Winamp AVS, which used the same trick in software 15 years ago), it makes sense from the webdev perspective of "JavaScript is damned slow"; in fact, it's probably far too slow to even attempt this CPU-side.
The caustics are much more interesting, though, firstly because I didn't already know the trick, never re-invented it during my career, and didn't even know about the OpenGL ES extension that makes it practical (which is very old on desktop and very generally useful).
It's always cool to see some raytracing logic applied with some cleverness to get more mileage out of it in real time...
Is it the same version as the one from 2011, or was it updated recently? Because it's a [very cool but] old demo, yet it's been popping up in every feed over the past few days.
It's amazing, the performance difference with WebGL on Chrome vs. Firefox. It lagged to like 5-10 fps in Chrome, but I'm getting 60+, it seems, in Firefox.
Apple still doesn't have WebGL turned on by default in Safari. You can turn it on by showing the 'Develop' menu (Preferences -> Advanced -> Show Develop menu), and checking the 'Enable WebGL' option inside.
[0] http://atnan.com/blog/2011/11/03/enabling-and-using-webgl-on...