Hacker News

I am ranting slightly outside the scope of my expertise, but I think 'web bytecode' is putting lipstick on a pig. In fact, I think the entire web stack is upside down: it was designed to serve mostly static web pages, perhaps with a counter or a mouse-over effect, and it does that well. But to start with the browser: a nice HTML parser, a DOM tree, etc. are nice to have, sometimes. The same goes for HTTP: it is a stateless protocol, which again is nice, except when you want state. And on the other side of the connection is the web server, which today is mostly a glorified front end for a database.

So every time I think about the web, my sense of software design rebels. It should be constructed the other way around: a nice VM on the client, which contains a browser when it is supposed to present structured hypertext, communicating with a server over TCP/IP without reinventing TCP on top of HTTP, and a server that is actually tailored to whatever it is supposed to do. (Before anybody accuses me of advocating Java: I want all of this nicely implemented.) But unfortunately, it is probably a billion users too late to start again from scratch.



I've often had this thought. But I keep coming back to this idea: HTML and CSS are actually a really nice way to describe an interface, for a few reasons:

1. Reusable styles plus case-by-case overrides, which is very important in a world where graphic design is so central to what we do. To me, CSS beats your typical GUI builder any day.

2. Flow. HTML/CSS have a sophisticated model for automatically flowing text and sizing elements to fit. The content below gets pushed down by the content above, automatically and as much as necessary. Other GUI models have similar things, but I prefer the web's take on it.

3. URLs. These are a beautiful concept. A string that uniquely identifies a certain resource, be that resource a document, a certain screen in an interactive interface, a record, whatever.

Now, you might say you could build all these things into your ideal client. But if you did, what would you have? Seems to me you'd have a web browser.

I do agree with your argument about the network protocols, though. We started with TCP, then we put HTTP on top of it to transfer documents. Then we started sending lots of HTTP requests back and forth for interactivity, and even having the client poll the server as a hack for pushing. Like you said, it started to look like TCP on top of HTTP on top of TCP.
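The client-poll-as-push hack mentioned above (long polling) can be sketched in-process. This is only an illustrative model, not anyone's actual implementation; the `poll` function and `PENDING` queue are hypothetical names standing in for a server-side request handler and event source:

```python
import queue
import threading

# Hypothetical in-process stand-in for a server-side event source.
PENDING = queue.Queue()

def poll(timeout=30.0):
    """One long-poll request: block until an event arrives or the request
    times out, then return (the client immediately issues the next poll)."""
    try:
        return PENDING.get(timeout=timeout)
    except queue.Empty:
        return None  # the client would re-poll at this point

# Simulate the server producing an event while a client request is waiting.
threading.Timer(0.1, PENDING.put, args=("new message",)).start()
print(poll(timeout=5.0))  # -> new message
```

The point of the sketch is the shape of the hack: every "push" costs a full HTTP request/response cycle, which is why it looks like reinventing TCP's persistent connection on top of HTTP.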

But now we have WebSockets, which in my opinion are a pretty lean and mean protocol on top of TCP. I like to think of them as regular TCP with a few conveniences like negotiation. Granted, they are not in use everywhere, and I still use the TCP-on-top-of-HTTP-on-top-of-TCP approach for most of my stuff, mostly out of technical conservatism. But I think it'll get there.
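The negotiation referred to above is just an HTTP Upgrade handshake. As one concrete piece of it, here is the server-side accept-key computation specified by RFC 6455 (the GUID is fixed by the spec; the example key/value pair is the one given in the RFC itself):

```python
import base64
import hashlib

# Fixed GUID from RFC 6455: the server appends it to the client's
# Sec-WebSocket-Key header, hashes with SHA-1, and base64-encodes the digest.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example handshake values from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this one round trip the connection drops the HTTP framing and behaves much like the raw TCP stream described above, plus lightweight message frames.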

As to the statelessness of regular HTTP, it still has its place. Sometimes stateless connections can be very beneficial. Imagine a site like Wikipedia. You wouldn't want everyone who's currently reading a page to have an open socket. With HTTP, you open the socket, you transfer the document, and you close the socket. Done. It's unfortunate that we have to open a new socket for every image, CSS file, etc., but I also expect that situation to improve over time.
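The open-transfer-close cycle described above can be demonstrated end to end with Python's standard library. This is a minimal sketch, not production code; the one-page server and its handler class are invented for the demo:

```python
import http.client
import http.server
import threading

# Minimal one-page server, just to have something to talk to.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Open a socket, transfer the document, close the socket. No state
# survives on the server between requests.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/", headers={"Connection": "close"})
resp = conn.getresponse()
status, body = resp.status, resp.read()
print(status, body)  # -> 200 b'<html>hello</html>'
conn.close()
server.shutdown()
```

Everything the server needs to answer the next request arrives inside that next request, which is exactly why a Wikipedia-style read-mostly site scales well under this model.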


Actually, I agree: we could do a lot worse than HTML, CSS, and JS on the client, and more generally I think a lot of the developments of the Internet were really fortunate accidents. But there are a lot of potentially useful services that do not need a GUI, for example an rsync client, or that would profit from a different metaphor than the newspaper-with-moving-pictures metaphor HTML imposes, for example games.


Check out Atlantis/Embassies; they've proposed a very similar thing: http://research.microsoft.com/apps/pubs/default.aspx?id=1799... (though I'm not sure why they call a lightweight VM a "pico-datacenter")

There may be a way to incrementally approach this architecture by reimplementing existing functionality as modules that run inside some kind of VM and then deprecate the old hardcoded version.


> There may be a way to incrementally approach this architecture by reimplementing existing functionality as modules that run inside some kind of VM and then deprecate the old hardcoded version.

Hopefully, but if the history of computing is any indication, layers of abstraction can only be added, not removed.


> But unfortunately, it is probably a billion users too late to start again from scratch.

A potentially interesting way around this is to make the client open source and get it shipped by all the Linux distributions. That would give application developers an initial audience which would help solve the chicken-and-egg problem.


> Before anybody accuses me of advocating Java, I want all of this nicely implemented.

So... I take it that in your opinion the reasons why Java (and to a lesser extent Flash) didn't supersede the web have to do with its implementation flaws, not its concept?


That would be my take, anyway. A big issue with Java is that it required installing a huge standard library; that should just be downloaded as required. And Flash was never really intended for applications; it was always about "rich content". And AIR (the application framework built on Flash) has the problem that it's proprietary, so you have to trust Adobe.


I think the evolution from 1998 hypertext to today's web apps makes sense, in that every step along the way was reasonable. And by the time one could actually leverage the advantages of a VM concept, the browser was the entrenched incumbent.



