I don't know if that's true. It's important to consider that most of the state is US government owned (military, national park, national forest). A large portion of the rest is used for mining which is, I think, still the largest industry in the state, employing a formidable share of workers and fueling related industries (trucking, concrete, gravel, salt, etc.).
Combined, these put a strain on land as a resource, and solar is the energy source that demands that same resource most heavily.
Leasing land for solar pays very little. The only reason people do it is because the land has no better use and solar doesn’t permanently damage it the way mining or farming could. Other industries aren’t being priced out.
The USA is one of the largest countries by landmass on the planet. We are not short on space anywhere in any capacity except immediately surrounding major cities.
I see people grow in two directions. Some grow upward, becoming manager, director, VP, etc. Others grow outward, branching into new technologies and disciplines, while remaining an IC.
Regardless of which direction you grow, I think given enough time, the quality of your work will speak for itself.
I've watched too many people try to run the rat race of moving up the ladder, staying at a job/role for only 18 months just to hop to the next thing. They lack depth in their area and eventually bottom out completely.
I started using "The Cloud" in 2012 to build apps. At the time they called it Google App Engine. In the last 13 years a lot has changed and I've used all the main cloud offerings.
I've been a part of many outages that originated with our cloud provider. And what I've learned is that best practices rarely save you. We pretend multi-AZ, global DBs, and failovers protect us, but they don't. They make us more scalable, but not more resilient.
These cloud services generally have great uptime, and they have become so integral to the web that when they go down, everything goes with them.
If a random person was hosting their own server somewhere, they would probably avoid the huge outages, but likely have more frequent smaller outages and nowhere near the reliable uptime that the big providers have, right?
MSK IAM support has long mystified me. I think they only supported Java for the first 9 months or so. Even now they still don't have Go or PHP support. It's not a ton of work; they're reusing request signer code anyway.
According to my teammate who actually wrote the C++ code for this, there's a lack of documentation on how AWS_MSK_IAM is supposed to work. He had to check the Java/Python implementations line by line to avoid guesswork.
Well, there's precedent for that: the $(aws eks get-token) output is just a base64-encoded pre-signed GetCallerIdentity URL. I don't think that's documented anywhere either, but it can be spotted by squinting at the aws-iam-authenticator source.
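To make that concrete, here's a sketch of how one could unwrap such a token. The `k8s-aws-v1.` prefix and unpadded URL-safe base64 are assumptions gleaned from reading the aws-iam-authenticator source, not from any documented contract, and the token below is built locally for illustration rather than fetched from AWS:

```python
import base64


def decode_eks_token(token: str) -> str:
    """Strip the k8s-aws-v1. prefix and base64url-decode the pre-signed
    sts:GetCallerIdentity URL assumed to be embedded in the bearer token."""
    prefix = "k8s-aws-v1."
    if not token.startswith(prefix):
        raise ValueError("not an EKS bearer token")
    payload = token[len(prefix):]
    # The authenticator emits unpadded URL-safe base64; re-pad before decoding.
    payload += "=" * (-len(payload) % 4)
    return base64.urlsafe_b64decode(payload).decode("utf-8")


# Hypothetical token constructed locally, just to show the round trip:
url = "https://sts.us-east-1.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15"
token = "k8s-aws-v1." + base64.urlsafe_b64encode(url.encode()).rstrip(b"=").decode()
print(decode_eks_token(token))
```

The API server then just fires the decoded URL at STS and trusts whatever identity comes back, which is a neat trick, but exactly the kind of thing they can change in one commit since it was never promised anywhere.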
My suspicion is that if they didn't want to bother writing a C++ client, they certainly wouldn't have the empathy(?) to document how anyone else could, either. I said empathy, but I wonder whether publishing how something works would commit them to it, whereas currently they're only one commit away from changing it in their clients without having to notify anyone.
I work at a popular Seattle tech company, and AI is being shoved down our throats by leadership, to the point it was made known they're tracking how much devs use AI, and I've even been asked why I'm personally not using it more. I've long been a believer in using the right tool for the right job. Sometimes that's AI, but not super often.
I spent a lot of time trying to think about how we arrived here. Where I work there are a lot of Senior Directors and SVPs who used to write code 10+ years ago. If you asked them to build a little hack project, they would have no idea where to start. AI has given them back something they'd lost, because now they can build something simple super quickly. But they fail to see that just because it accelerates their hack project, it won't accelerate someone who's an expert. I.e., AI might help a hobbyist plant a garden, but it wouldn't help a farmer squeeze out more yield.
> just because it accelerates their hack project, it won't accelerate someone who's an expert.
I would say that this is the wrong distinction. I'm an expert who's still in the code every day, and AI still accelerates my hack projects that I do in my spare time, but only to a point. When I hit 10k lines of code then code generation with chat models becomes substantially less useful (though autocomplete/Cursor-style advanced autocomplete retains its value).
I think the distinction that matters is the type of project being worked on. Greenfield stuff—whether a hobby project or a business project—can see real benefits from AI. But eventually the process of working on the code becomes far more about understanding the complex interactions between the dozens to hundreds of components that are already written than it is about getting a fresh chunk of code onto the screen. And AI models—even embedded in fancy tools like Cursor—are still objectively terrible at understanding the kinds of complex interactions between systems and subsystems that professional developers deal with day in and day out.
My experience has gotten better by focusing on documenting the system (with AI to speed up writing markdown). I find reasoning models quite good at understanding systems if you clearly tell them how they work. I think this creates a virtuous circle where I incrementally write much more documentation than I ever had the stomach for before. Of course this is still easier if you started greenfield, but it's allowed me to keep Claude 3.7 in the game even as the code base is now 20k+ lines.
That's better than my past experience with hobby projects, but also nowhere near as big as the kinds of software systems I'm talking about professionally. The smallest code base I have ever worked on was >1M lines, the one I'm maintaining now is >5M.
I don't doubt that you can scale the models beyond 10K with strategies like this, but I haven't had any luck so far at the professional scales I have to deal with.
I've found claude-code good in a multi-million line project because it can navigate the filesystem like a human would.
You have to give it the right context and direction — like you would to a new junior dev — but then it can be very good. Eg.
> Implement a new API in `example/apis/new_api.rs` to do XYZ which interfaces with the system at `foo/bar/baz.proto` and use the similar APIs in `example/apis/*` as reference. Once you're done, build it by running `build new_api` and fix any type errors.
Without that context (eg. the example APIs) it would flail, but so would most human engineers.
Well, I have also worked on systems of multiple millions of lines, well pre-LLM, and I sure as hell didn't actively understand every aspect of them. I deeply understood the area I worked on, the contracts with my dependencies, and the contracts I provide. I also understood the overall architecture. We'll see how it goes if my project grows to that point, but I believe by clearly documenting those things, and overall focusing on low coupling, I can keep the workflow I have now, just with context loading for every session. Time will tell.
In general though, it's been a lot of learning about how to make LLMs work for me, and I do wonder if people simply dismiss them too quickly because they subconsciously don't want them to work. Also, "LLM" is too generic. Copilot with 4o sucks, but Claude in Cursor and Windsurf does not suck.
Cool anecdote, for me it has slowed me down 8x to 23x since I started using it in real projects with real customers in a real code base last year.
So 1-1 in pointless personal anecdotes.
Now show us the numbers! How did you measure this? Can you show a 2x/5x increase in projects/orders/profits/stock price?
I'm not really sure I understand your counterargument. Pretty much everything about personal productivity is anecdotes, because it's so uniquely tied to an individual. I showed you my numbers: I am 2x to 5x faster at delivering projects.
The point is that leadership gets to write on their own promo document / resume about how they "boosted developer productivity" by leading the charge on introducing AI dev processes to the company. Then they'll be long gone onto the next job before anybody actually knows what the result of it was, whether it actually boosted productivity or not, whether there were negative side-effects, etc.
Aye, this is a limitation of the current tech. For any project greater than 1k lines where the model was not pretrained on the code base, AI is simply not useful beyond documentation search.

It's easy to see this effect in any new project you start with AI: the first few pieces of functionality are easy to implement, and boilerplate gets written effortlessly. Then the AI can't reason about the code and makes dumb mistakes.
PDFBox is just as good for 99.9999% of documents. We used to use iText, and at the last renewal they tried to 10x our yearly license cost, to the point it would have been more expensive than our AWS bill. No thanks.
Beyond the fact that it is a total scam, it also creates a lot of animosity among employees. Because there is no set limit (heck, there are rarely guidelines), people often feel it's applied unfairly in their teams and orgs.
If there aren't guidelines, that's where the problem lies. Management should be all over this. One size fits all. "Unlimited" is a nonsense term that shouldn't be taken literally, and abuse needs to be called out.
I completely agree. I've worked at three places with unlimited PTO, and none of them set guidelines. I think the moment you set that expectation, the illusion of it being unlimited goes away.
This is an opportunity to bring dollars and jobs to their state and the only reason they're resistant is to stick with party lines.