“There’s a suspected attack or fraud on your account” is unfortunately common from real companies. Half the time it feels exactly like phishing when it’s legit.
The waste of slow JS bundles is nothing compared to the cost of bloated interpreted runtimes and inefficient abstractions. Most production software is multiple orders of magnitude slower than it needs to be. Just look at all the Electron apps that use multiple gigabytes of RAM while doing nothing and are laggier than similar software written 40 years ago, despite having access to an incredibly luxurious amount of resources by any sane historical standard.
Something I realized while doing more political campaign work is how inefficient most self-hosted solutions are. Things like Plausible or Umami (analytics) require at least 2 GB of RAM, Postiz (a scheduled social media planner) requires 2 GB of RAM, and so on.
It all slowly adds up: you think a simple $10 VPS with 2 GB of RAM is enough, but it isn't, especially if you want a team of 10-30 people working sporadically on the same box.
There are major wins to be had by rewriting these programs in more efficient languages like Go or Rust. That would make self-hosting more maintainable and break away from the consulting class, which often sells worse solutions at far higher prices (for example, one consulting group sells software similar to Postiz for $2k/month).
So you have free software that requires 2 GB of RAM and the alternative is $2k per month and you're complaining that the free solution is inefficient? Really?
Why do you expect to be able to replace a 2k/month solution with a $10/month VPS?
Because the fundamental task many of these programs are doing is neither complicated nor resource intensive.
In the age of cheap custom software solutions everyone should at least try to make something themselves that's fit for purpose. It doesn't take much to be a dangerous professional these days, and certainly more than ever before can a dangerous professional be truly dangerous.
Thank you. I get so confused when people think a $5 VPS shouldn't be able to do much. We're talking about the 99% of small businesses that might have 5 concurrent users max.
2 GB of RAM should be considered overkill for every single business case across a variety of tools (analytics, mailer/newsletter, CRM, socials, e-commerce).
He's saying that the software seems free, but is so inefficient that it bloats other costs to run it. And he never said he wanted to replace $2K/month with $10/month.
I'm not saying it's so bad that I don't recommend it; quite the opposite. But these things could be written in more performant languages. There's no reason a cron job scheduler should require 500 MB of RAM at idle. Same for the analytics. That is just a waste of resources.
Software can be drastically less resource intensive; there is no excuse beyond a willingness to exacerbate the climate crisis.
This period of our history in the profession will be seen as a tremendous waste of resources and effort.
I am writing software myself, and your attitude is just weird. We should always strive for better, more efficient software. The climate crisis is a real thing, and our industry has done an excellent job of exacerbating it with ever more inefficient tools, libraries, and languages.
People prefer JS because all they know is JS; it's that simple. Please tell me why you think devs choose JS. I'm legitimately curious, but your attitude of constant dismissal and disparagement makes it seem like you just want to beat people down rather than engage.
Dude, the $2k solution is not only worse than Postiz, they also charge an additional thousand for each channel.
It's just garbage software; I brought it up as an example, I don't know why. Commenters here like knowing snippets about other corners of the industry, I know I do at least.
But to answer your question: yes, I do expect a cron job scheduler, analytics, and a CRM not to require 8 GB of RAM in order not to barf on themselves too hard.
These things are incredibly resource intensive for their actual jobs. The software is incredibly wasteful.
A $5 VPS should be enough to host every suite of software a small business needs. To think otherwise is extremely out of touch. We're talking about 3 concurrent users max here; software should not be buckling under such a light load.
The expectation is that these aren't complicated tools, so they should not command that many resources. Why do you think a $5 VPS with half a gig of RAM can't handle basic cron/background jobs or management software? 512 MB of RAM can do a lot if you choose the appropriate tools, but if you start with a weak foundation that requires 512 MB just to sit idle, you hurt a class of users who could benefit from this software.
These things aren't complicated, but when you choose Node.js/JavaScript they become way more complicated than expected. I say this as someone who has only ever worked professionally with JS, and nothing else, across a 15-year career.
Writing software that can only be used by the affluent is not the direction I want our industry to go in.
I guess there's a distinction between capacity that could be taken up by other things, and free capacity that doesn't necessarily cost anything.
For a server built in the cloud those cycles could actually be taken up by other things, freeing the system and bringing costs down.
For a client computer running Electron, as long as the user doesn't have so many Electron apps open that their computer slows down noticeably, that inefficiency might not matter that much.
Another aspect is that devices keep getting cheaper and faster, so today's slow Electron app might run fine on a system a few years from now, and that capacity was never going to be taken up by anything else on the end user's device.
It's more likely that the Electron app uses poor code and has supply chain issues (npm, …). Also, loading a whole web engine into memory is not cheap. The space could have been used to cache files, but it isn't, which is inefficient, especially since laptops' uptimes are generally long.
Electron apps tend to use a lot of memory because the framework favors developer productivity and portability over runtime efficiency.
- Every Electron app ships with its own copy of Chromium (for rendering the UI) and Node.js (for system APIs). So even simple apps start with a fairly large memory footprint. It also means that Electron essentially ships two instances of the V8 engine (the JIT-compiling JavaScript engine used by both Chromium and Node.js), which just goes to show how bloated it is.
- Electron renders the UI using HTML, CSS, and JavaScript. That means the app needs a DOM tree, CSS layout engine, and the browser rendering pipeline. Native frameworks use OS widgets, which are usually lighter and use less memory.
- Lastly, the problem is the modern web dev ecosystem itself; it is not just Electron that prioritises developer experience over everything else. UI frameworks like React or Vue use things like a virtual DOM to track UI changes. This helps developers build complex UIs faster, but it adds extra memory and runtime overhead compared to simpler approaches. And obviously don't get me started on npm and node_modules.
If you’re referring to Thaler v. Perlmutter, that is not binding precedent nationwide, only in courts under the D.C. Circuit. And it only applies to “pure” AI-generated works; it did not address AI-assisted works, which seem very likely to be copyrightable.
If I want to clone some GPL code into an MIT-licensed project, and it ends up in the public domain because it can't be copyrighted, what do I care? I've still got the code I want, without the GPL.
Good writers are often good in recognizably unique ways. To the extent that LLMs produce “good writing,” which I happen to think they mostly do, they tend to overuse specific devices which give their writing a quality that most people are already sick of.
You can tell good writers from LLMs because good writers post comments that mean something, that add to the conversation, that bring in personal experiences. While LLM comments just summarize the article and end with some engagement call to action like "Curious to hear what others think"
The observatory is named in honour of Vera Rubin. That makes sense. The commercial company deciding to name their new generation of chips does not (at least to me).
I'm in the former group where I personally support Meta taking risks on big ideas while still being profitable. Just like SpaceX and others. I don't blame Mark for getting excited about AR which is very likely a big market in the future, the gamble on the tech being affordable enough was just far too early for the scale of investment. Their investments there might still pay off as it gets cheaper.
I feel pretty productive myself with AI but this list isn’t beating the rap that AI boosters mostly use AI to do useless stuff focused on pretending to improve productivity or projects that make it easier to use AI.
The user you're responding to lists a "blood test viewer" [0], which looks to be a tool that turns his blood test PDFs into structured, analyzed data. Are you saying that unless he continuously revises and upgrades the code, it's still "abandonware", even if it meets his needs for the near future?
Bit rot is real. The dependencies listed here include calls into AI APIs that will stop working over time. So yes, if no one keeps this up to date, it will likely rot into uselessness very quickly.
That's not even mentioning that this tool doesn't do much beyond wrap a call to Claude. And it's using Claude to display blood test data to the end user. This is not something I'd trust an LLM not to mess up. You'd really want to double-check every single result.
Just saying: you can paste the sample report into ChatGPT and it does the same thing, and even creates interactive graphs for you. I'm not sure how useful something is if a chatbot can do it, with the side benefit of being able to ask follow-up questions.
I guess the custom UI makes you believe you can trust the output, as if there's any thought going into it rather than just an LLM hallucinating for you.
Missing the point. I no longer need to buy or rely on someone else for software I want to use. A lot of things I want to do ARE one offs. I can write software and throw it away when I'm done.
I know this sounds sarcastic but I really mean it: For years everyone has been monastically extolling some variation of "the best code is deleted code". Now, we have a machine that spits out infinite code that we can infinitely delete. It's a blessing that we can have shitty code generated that exposes at light speed how shitty our ideas are and have always been.
Maybe, although it's actually giving me OCD, I think. It's really hard to tune out because of the irregular ticking. I implemented a regular mode to combat this, defeating the purpose somewhat.
Unpredictable things catch our attention - it's the exceptions that are important to survival, and our brains evolved to cope with the stimuli that this experiment messes with.
Something like this would be anxiety-inducing for most people, I bet. That'd be an excellent experiment: track heart rate, EEG, and performance on a range of cognitive tasks, with two-minute breaks between tasks; one group exposed to the irregular ticking, another to regular ticking, another to silence, and one last group to pleasant white noise.
It sounded fun (and it is)! My favorite mode is one that ticks each second imperceptibly fast, and then stalls for a second in one of the ticks (so that it lasts two).
It's just the right amount of "did that clock just skip a beat? Nah must just be my imagination".
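That mode is easy to sketch. Assuming a fixed stall position for illustration (the real clock presumably stalls at an unpredictable tick), the inter-tick delay schedule might be built like this; `tickDelays` and `skipEvery` are made-up names:

```go
package main

import "fmt"

// tickDelays builds a schedule of n inter-tick gaps in milliseconds:
// normally 1000 ms, but every skipEvery-th tick "stalls", lasting
// 2000 ms, long enough to trigger the "did it skip a beat?" feeling.
func tickDelays(n, skipEvery int) []int {
	delays := make([]int, n)
	for i := range delays {
		if (i+1)%skipEvery == 0 {
			delays[i] = 2000 // the stalled beat lasts two seconds
		} else {
			delays[i] = 1000
		}
	}
	return delays
}

func main() {
	fmt.Println(tickDelays(6, 3)) // [1000 1000 2000 1000 1000 2000]
}
```

A real clock would then sleep for each delay in turn (and randomize where the stall lands) rather than print the schedule.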
Some of them definitely do not. Like a fictional encyclopedia? What is the point of that? That's like "an alphabetical novel".
And even for the ones that might "beat the rap", I don't understand from your descriptions why they are interesting or unique. A voice note recorder? Cool. There are already hundreds if not thousands of those, why did you need to make your own in the first place? I'm not saying that yours isn't special, I'm just saying that it doesn't help to post the blandest description possible if you're trying to impress people with the utility of your utility.
So not only does he have to show what he built with AI, what he built with AI has to be interesting and unique to you? Why? He's not selling it to you.
Seems like the bar is now that it has to be a mass-market product. On another post, someone else commented that a SaaS doesn't count if it doesn't earn sustainable revenue.
I guess OpenClaw also doesn't count because we don't know how much Peter got from OpenAI.
This is an ideological flame war, not a rational discussion. There's no convincing anyone.
It's kind of like the opening sequence of Back to the Future, when it shows all the random inventions at Doc's house.
Yeah, they're interesting, and I guess they do something, but are any of them actually delivering value? That's when you get into the argument of what value is and to whom. But as for AI's role in generating productivity for society, it's pretty disputable whether every person being able to build their own train set that turns on the toaster and makes coffee is going to move us forward as a species the way, say, the internet did.
That's really the only argument: is the use of LLMs worth the trillions of dollars and the selling out of humanity's future? Not whether it can churn out fun, quirky apps really fast.
> Seems like the bar is now it has to be a mass market product.
The bar for this will just keep moving. Some people are heavily invested in the anti-stance, so human nature being what it is, you've little hope of changing their minds anyway.
No, the bar is accurate and descriptive descriptions. You know how AI wording is typically hollow and devoid of meaning, loads of grammatically fine words that don't actually say anything? Well, these repos are the GitHub version of that. Lots of words, but so starved of meaning that I shut off mentally trying to read half of them. Some descriptions are outright lies.
I'm actually becoming an AI convert myself. If there is ideology here, it's not about AI, but about keeping trash off the streets.
For example, I checked out their "Fictional Encyclopedia". It's an absolutely terrible project, much worse than useless, because it claims to be an "encyclopedia" right in the name (the tagline is "Everything about everything"), yet it's engineered to just completely make things up, and nowhere on the page does it indicate this! I looked up my own niche open-source project, and was prepared to be at least somewhat impressed that it pulled together facts on the fly into an encyclopedic form. For the first couple of paragraphs that seemed like it might be the case, then it veered into complete fantasy and just kept going.
Like, what is the point of this? I can already ask a chatbot the same question, and at least then I have explicit indicators that it might be hallucinating. But this page deliberately blurs truth and fiction for absolutely zero purpose. It's a waste of brain cells for both the creator and the consumer, with no redeeming value. It's neither interesting, nor different, nor valuable. AND it's burning tokens to boot!
I mean, come on, the bar is not that high. Some of stavros' projects may even be over it. But the first projects I checked were sub-basement, and I am not interested in searching through mounds of trash for what might be a quarter dollar. I'm actually kind of disappointed that stavros didn't have (or apply) the sense or taste to whittle down that list of 11 (!) projects to some 3 that show off the value of their work. Which I'm starting to understand is everyone's issue with AI brain rot; it seems to just encourage "here's everything, I dunno, you figure it out" which is maddening and deserves the pushback it gets.
Sounds like the goalposts are moving from "not useless stuff focused on pretending to improve productivity or projects that make it easier to use AI" to "extremely useful stuff".
One issue is that I interpreted the parent as OR, not AND. "useless stuff OR productivity tools OR AI tools".
Moreover though, I'm not even saying you shouldn't do those things. I'm actually playing around with AI quite a bit, and certainly have created my share of useless/productivity tools. But it's not a flex to show off your own Flappy Birds or OpenNanoClaw clone, even if they are written in COBOL or MUMPS.
And they definitely do not have to be "extremely useful". But they should answer the question: what problem does it solve?
Fair. But finally we are seeing what LLM proponents are putting forward.
And it's exactly what I expected: lines of code. Cute. But... so what? This is not good for the AI hype, nor for continued support for future investment.
On the other hand all this stuff is going to drive continual innovation. The more tokens generated the more model producers invest. And we might eventually get to a place of local models.
I have the opposite experience: the number of AI boosters deriding the less enthusiastic, gleefully exclaiming how someone will be "left behind" if they don't immediately adopt the latest hype cycle, or sharing AI slop and either embellishing or outright lying about its capabilities is making me want to log off forever. "Handwritten code? Don't you only care about providing maximum shareholder value?" No.
Don't do that; just avoid answering the "non-believers" or whatever they're called. Your comments are insightful for me (and for a lot of other people, I'm sure). You don't need to prove that they are useful; just comment about your experience and ignore them. It's like arguing about religion, trying to make the other person flip their beliefs (a waste of time for everyone involved).
I guess you're right, I really need to get better at ignoring some people. It just really got to me today because someone else looked at one of my projects for two seconds and decided to tell me off for it being "insecure" and "slop", and it kind of ruined my day.
don't waste your time, they're a slop slinger who won't take any feedback that could feel like a hit to the ego. I've wasted too much time on them already, cut your losses and move on. Their 'safer' personal bot for example is anything but, but they won't listen to feedback.
I get the sentiment, but this is natural with a groundbreaking new technology. We are still in the process of figuring out how best to apply generative LLMs productively. Lots of people tinker and share their results. Most of it is surely hype and will get thrown away and forgotten soon, but some of it is solid. And I am glad for it: I did not take part in that, but I now enjoy the results, as the agents have become really good.
This is exactly the same reason why the appropriate question to ask about Haskell is "where are the open source projects that are useful for something that is not programming?"
The answer for Haskell, after three decades, is very, very little. Pandoc, git-annex, XMonad. There might be something new since I last did the exercise, but for Haskell the answer is "not much". Then we examine why the kids (us kids of all ages) can't or don't write Haskell programs.
The answer for LLM coding may be very different. But the question "where is the software that solves a problem outside its own orbit?" is crucial. (You have a problem. You want to use foo to solve it; now you have two problems, but you can use foo to solve part of the second one!)
The price of getting code written just went down. Where are the site/business launches? Apps? New ideas being built? Specifically. With links. Not general, hand-wavy "these are the sorts of things that ..." because even if it's superb analysis, without some data that can be checked it's indistinguishable from hype.
For instance, there's an abandoned open source project I would have liked to see revived: https://www.wickeditor.com/
(an attempt at recreating Flash with web technology). The current official state of the repo: outdated dependencies, broken build process, etc.
I looked into doing it manually, but gave up. Way too much dirty work, and I had no energy for that.
Then I discovered that the Claude CLI got good, and told it to do the work (with some handholding).
And it did it. Build process modernized. No more outdated dependencies. Then I added some features I missed in the original wick editor. Again, it did it and it works.
A working editor that was abandoned and missed features - now working again with the missing features. With minimal work done from my side (but I did put in work before to understand the source).
I call this a very useful result. There are lots of abandoned, half-working projects out there. Lots of value to be recovered. Unlike with Haskell, the agents are not just busy building agents, but real tools.
Currently I have the agents refactoring an old codebase of mine. Lots of tech debt. Lots of hacks. Bad documentation. There are features I've wanted to implement for ages but never did, as I didn't want to touch that ugly code again. But Claude did it. It's almost scary what they're already capable of.
I don't think you should feel like your personal projects need to be vetted by an armchair peanut gallery. It's actually kind of offensive how so many people show up in a thread like this and demand that what sparked joy for you be formally subjected to a gauntlet of moving goalpost validation markers.
Quite simply, I don't think that they are asking or arguing in good faith.
I've actually felt the same way about some (not all) but some "productivity" hacks I've seen people post online with their OpenClaw setups.
I chuckle when I see some of them because you could achieve the same (or often faster) result by jotting a note onto a notecard and sticking it in your pocket.
Most of the other automations running don't really seem to serve any real purpose at all.
I mean I’m using it to deconstruct and reinvent my development process from the ground up, but it’s so easy to do this now and so customized for my specific needs that the idea of posting about it never crossed my mind.
> maybe it will force some countries to take climate change seriously.
lol, rofl even.
Why didn’t anyone take climate change seriously during any of the previous years-long periods when oil was over $100 and why would now be any different?
It already happened. Why do you think China showered their EV industry in government money and attention?
Smog reduction is nice, but cars were never the main offender and denial was working OK. The real answer is that they were reducing geopolitical vulnerability to oil cutoff. It's a long road and they aren't at the end of it yet, but they can plan a few moves ahead. Unlike our guy, who poked a stick into the hornet nest and was surprised by the outcome.
This post is all about how they upstreamed their improvements!
If you get mad when a company makes good use of open source and contributes to a project’s betterment, you do not understand the point of open source, you’re just fumbling for a pitchfork.