I think it’s pretty clear what the purpose of this stuff is: get people so invested into the Claude ecosystem with certs and “modernization kits”, so that when the subsidies end and subscription costs shoot up they feel they’re in too deep now to switch to something cheaper.
> so that when the subsidies end and subscription costs shoot up
Subscription prices are capped by API rates (and realistically sit way below that ceiling: why would you even subscribe if you could just go pay-as-you-go instead?), and API rates already carry a big margin for Anthropic. What still costs them a fuckton of money by comparison is training, but that is only going to get more efficient with more purpose-built hardware on the way.
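To put made-up numbers on the "why would you even subscribe" point above (illustrative prices only, not Anthropic's actual rates):

    # Illustrative only: at what monthly usage does a flat subscription
    # beat pay-per-use API pricing? All numbers here are hypothetical.
    subscription = 100.0          # $/month
    api_price_per_mtok = 10.0     # $ per million tokens

    breakeven_mtok = subscription / api_price_per_mtok
    print(f"subscription wins past {breakeven_mtok:.0f}M tokens/month")

Anyone below that volume would just pay per use, which is why the subscription price can't drift far above what the same usage would cost on the API.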
Basically, I don’t see much of a reason to hike subscription prices dramatically. I don’t think they’ll stay at $100/$200, but anyone who’s paying that already knows how much value they’re getting out of it and probably wouldn’t mind paying more.
I'm not sure what you mean; perhaps if you max out your subscription? If you pay $100 and don't use it, you don't get refunded the $100 because it's 'capped to API rates', which would have been $0.
> I think it’s pretty clear what the purpose of this stuff is: get people so invested into the Claude ecosystem with certs and “modernization kits”, so that when the subsidies end and subscription costs shoot up they feel they’re in too deep now to switch to something cheaper.
This will probably happen, unless the industry conspires to roll back the availability of general-purpose computation so that common people can only own computers with enough power to be glorified thin clients. The way this might look: good hardware never officially gets banned, it's just priced too high for anybody to afford and produced in small quantities to keep it that way, while all production shifts to massively expensive, powerful hardware for corporate buyers.
That is the biggest threat, and likely where things will end up eventually. The question is when that “eventually” arrives, and what the server-based providers can pivot to in that time.
Seems unlikely. We're already seeing specialized hardware optimized for LLM performance (Taalas, Groq, Cerebras), and simple economies of scale make these sorts of products a better value when rented from a server than when purchased, managed, and upgraded by the typical user.
Frontier models will continue to be either exclusively available from servers or significantly more affordable from servers vs local alternatives for the foreseeable future.
It’s crazy how you could easily lie about having 10 years of experience, because your results are not that much different from someone who has only used Claude Code for like a week.
I think the older AI users are even held back because they might be doing things that are not necessary any more, like explaining basic things ("please don't bring in random dependencies, prefer the ones that are already there"), or the classic "think really strongly and make a plan", or trying to use a prestigious register of the language in an attempt to make it think harder.
Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do, where it originated, and looks into causes. Oftentimes I come back to the chat and see a working explanation together with a fix.
Before, I had to actually explain why I wanted it to change some implementation in some direction, otherwise it would refuse: "no, I won't do that because abc". Nowadays I can just give the raw instruction, "please move this into its own function", and it follows.
So yeah, a lot of these skills become outdated very quickly. The technology is changing so fast that one constantly needs to revisit whether what one had to do a couple of months earlier is still required, and whether the limits of the technology are still in the same place or further out.
> I think the older AI users are even held back because they might be doing things that are not necessary any more
Being the same age as Linus Torvalds, I'd say that it can be the opposite.
We are so used to "leaky abstractions", that we have just accepted this as another imperfect new tech stack.
Unlike less experienced developers, we know that you have to learn a bit about the underlying layers to use the high level abstraction layer effectively.
What is going on under the hood? What was the sequence of events which caused my inputs to give these outputs / error messages?
Once you learn enough about how the underlying layers work, you'll get far fewer errors, because you'll subconsciously avoid them. Meanwhile, people with an "I only work at the high level" mindset keep trying to feed the high-level layer different inputs more or less at random.
For LLMs, it's certainly a challenge.
The basic low-level LLM architecture is very simple. You can write a naive LLM core inference engine in a few hundred lines of code.
But that is like writing a logic gate simulator and feeding it a huge CPU gate list + many GBs of kernel+rootfs disk images. It doesn't tell you how the thing actually behaves.
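To make that concrete, here's a hedged sketch of such a naive core in numpy: one causal self-attention block plus a greedy decode loop, with random weights and hypothetical sizes (no layer norms, MLPs, or real checkpoints, so it emits gibberish, which is exactly the point: the core is trivial, the behavior isn't):

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, DIM = 256, 64    # hypothetical sizes

    W_emb = rng.normal(0, 0.02, (VOCAB, DIM))
    W_q, W_k, W_v = (rng.normal(0, 0.02, (DIM, DIM)) for _ in range(3))
    W_out = rng.normal(0, 0.02, (DIM, VOCAB))

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def forward(tokens):
        x = W_emb[tokens]                  # (T, DIM)
        q, k, v = x @ W_q, x @ W_k, x @ W_v
        scores = q @ k.T / np.sqrt(DIM)    # (T, T)
        causal = np.triu(np.ones_like(scores), k=1) == 1
        scores[causal] = -1e9              # no attending to future tokens
        return softmax(scores) @ v @ W_out    # logits, (T, VOCAB)

    def generate(prompt, n_new):
        tokens = list(prompt)
        for _ in range(n_new):
            logits = forward(np.array(tokens))
            tokens.append(int(logits[-1].argmax()))   # greedy decode
        return tokens

    print(generate([1, 2, 3], 8))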
So you move up the layers. Often you can't get hard data on how they really work. Instead you rely on empirical and anecdotal data.
But you still form a mental image of what the rough layers are, and what you can expect in their behavior given different inputs.
For LLMs, a critical piece is the context window. It has to be understood and managed to get good results. Make sure it's fed with the right amount of the right data, and you get much better results.
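As a toy example of what "managed" can mean (a sketch with a hypothetical token counter and message format, not any real tool's API): keep the system prompt pinned, drop the oldest turns first, and never blow the budget:

    # Toy context-window management: newest turns survive, oldest fall off.
    def count_tokens(text):
        return len(text.split())    # crude stand-in for a real tokenizer

    def fit_to_window(system_prompt, history, budget=8000):
        used = count_tokens(system_prompt)
        kept = []
        for msg in reversed(history):            # walk newest-first
            cost = count_tokens(msg["content"])
            if used + cost > budget:
                break                            # oldest turns get dropped
            kept.append(msg)
            used += cost
        return [{"role": "system", "content": system_prompt}] + kept[::-1]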
> Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do
That's exactly the right thing to do given the right circumstances.
But if you're doing a big refactoring across a huge code base, you won't get the same good results. You'll need to understand the context window and how your tools/framework feeds it with data for your subagents.
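A hypothetical sketch of that pattern (not how Claude Code actually does it): fan the refactor out into per-file tasks, so each subagent sees only a slice of the codebase that fits its own context budget:

    from pathlib import Path

    # Hypothetical: one bounded task per file instead of one giant prompt.
    def build_tasks(repo_root, instruction, budget_chars=40_000):
        tasks = []
        for path in sorted(Path(repo_root).rglob("*.py")):
            source = path.read_text()
            if len(source) > budget_chars:
                # a real tool would split by function/class here
                tasks.append({"file": str(path), "note": "too big, split further"})
                continue
            tasks.append({"file": str(path),
                          "prompt": f"{instruction}\n\n--- {path} ---\n{source}"})
        return tasks    # each task goes to its own subagent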
I think GP meant 'longer time users of AI', not 'older aged users of AI'.
Their point being that it's not really an advantage to have learnt the tricks and workarounds a year or two ago, when the tools are so much better now that those tricks either aren't necessary or have been replaced by different ones.
The obvious solution is for Anthropic et al. to certify the skills of each user:
> “Good at explaining requirements, needs handholding to understand complex algorithms, picky with the wording of comments, slightly higher than average number of tokens per feature.”
I’m not saying this would be good at all, but the data (/insights) and the opportunity are clearly there.
I hope it’s at least a little tricky, since Claude was released only 3 years ago. That said, I would not be surprised to see companies asking for 10 years experience, despite that inconvenient truth.
I’ve seen it play out multiple times, and it highlights precisely why a candidate should never withhold their application based on a stated preference for years of experience with anything. They simply haven’t put much thought into those numbers.
You've never seen project managers basically propose the equivalent of getting a baby delivered in 1 month instead of 9 months by adding more people to the project?
But yeah, if the recruiters start asking for "10 years experience with Claude Code", then I guess a tongue-in-cheek answer would be "sure, I did 10 projects in parallel in one year".
Adding more people to a project doesn’t improve throughput past a certain point. Communication and coordination overhead (between humans) is the limiting factor. This has been well known in the industry for decades.
Additionally, I’d much rather hire someone who worked on a handful of projects but actually _wrote_ a lot of the code, maintained the project for a couple of years after shipping it, and has stories about what worked and didn’t, and why. Especially a candidate who worked on a “legacy” project. That type of candidate will be much more knowledgeable and able to steer an AI agent in the best direction more effectively, taking various trade-offs into account. It’s all too easy to just ship something and move on in our industry.
Brownie points if they made key architecture decisions and if they worked on a large scale system.
Claude building something for you isn’t “learning” in my opinion. That’s like saying I can study for a math exam by watching a movie about someone solving math problems. Experience doesn’t work like that. You can definitely learn with AI but it’s a slow process, much like learning the old fashioned way.
This is very likely a defensive move to help build pressure against Trump designating them a supply chain risk (aka corporate death sentence). The more embedded they become in large organizations, and the more authoritative they become in certification, the harder it is for the government to kill their company.
Maybe? My high school had typing classes, and classes on Word and spreadsheets and whatever. They also had a dental assistant program where you’d be certified by the time you graduated high school.
As a consumer of them, I love them. When a company with an influential, widely used technology or platform spends a ton of money signaling to the industry exactly what's important to know about it, creating a training curriculum for it, and building a whole infrastructure to verify when someone knows it, I'm going to take them up on all of that. Especially in the cases where the investment is something like $100, a little bit of studying (the kind I'd want to do anyway if I'm learning something new, and I'm happy to have their structured, prioritized list of topics and/or guided curriculum), and a couple of hours taking an online-proctored exam. From that perspective, I don't have a good reason not to have a certification in something that's super relevant to my role.
In interview/hiring situations where they're not expected or effectively required, they make for great chat fodder and a really good opportunity to exhibit awareness about yourself, the industry, and how the person on the other side of the table might perceive certifications given the context.
Bruh lol these courses are marketing material designed by fresh grad communications majors. You're falling for exactly the scam they want you to fall for by giving so much benefit of the doubt to entities which deserve none.
Edit: no I don't do this kind of work but my mother does so I know exactly how the sausage is made.
Unfortunately some business leads value these types of certifications and partner programs. I imagine there’s a great deal of overlap with these folks and those who use Gartner’s Magic Quadrant for purchasing decisions.
Most employees at most businesses show up, do as they are trained, and then go home, because that is what is asked of them. Even those who might have the inclination to explore new technology often will not, for fear of doing something wrong. And that creates a big market for training: a company wants their employees to use Claude, so the employees must be trained.
Startups / technology companies that expect employees to be self-starters who can be set free to frolic amongst the problems are an aberration.
Consultancies do. Deloitte are quoted on the page. Consultancy people at my place of work have all been "AI trained".
Doesn't stop them being useless though, like giving an electric drill to a chimp and telling them to build a house...lots of action, a lot of screeching, not much work.
One of the mistakes with AI is that people believe it will turn lead into gold: if you give AI bad prompts, AI will produce bad work.
Consultancies sell the resume, not the person. It's easier for them to quantify "we have 300 CCAs" than "we have this person Kim, who is really good."
Yes, because if that was their sales pitch, they would need to pay Kim more, and they would have to account for the fact that she's already allocated elsewhere. It's better to pretend all those CCAs are interchangeable.
Think of these like the Google cloud or AWS certifications. A few companies that specialize in them will want you to have them. But for the rest of the industry, your ability to ace the technical interview will matter more.
They do. Certifications make technical expertise legible to non-technical decisionmakers, and I've encountered people on both sides of that dynamic who affirmatively like it when companies set up programs like this. Obviously you and I would rather have someone who understands Claude make decisions about whether and how to use it, but in a lot of industries that's not realistic.
Uhh.. Deloitte and Accenture.. not exactly what I would call a good partner here unless you are looking for name recognition at executive level. Is that all that it is?
Who purchases and greenlights adoption? These cycles are very long and partnering with consulting firms gets you cross industry access.
In fact, if you look at basically every major AI/LLM player, you'll see a similar "alliance" or "partnership". It's a sales channel for high-end referrals.
Businesses that are already in conversations about building partnerships and training with Anthropic.
The real revenue that foundation model companies like Anthropic, OpenAI, Google DeepMind, and others generate comes from enterprise deals with a smattering of government - not consumer.
Consumer usage is largely a loss leader used as a training/refining tool, and it's best to view the economics of foundational model providers through the same lens you would a hyperscaler.
A major component of AWS's rise was the ecosystem built around training and teaching how to use AWS, thanks to the AWS certification program. Same for K8s via the Linux Foundation.
By building a partnership and training motion, Anthropic can get the WITCHes, Deloittes, PWCs, Accentures, KPMGs, and others to start offering turnkey services, which is why Anthropic has been working on building co-sell relationships with those kinds of companies.
We're 6 months away from some company's app/infrastructure/whatever going down and staying down, because literally nobody knows how the 500,000-line code base works and Claude is stuck in a loop.
Hey… if you weren't aware, the HN guidelines now include:
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
Who's going to buy into this cert program when in all likelihood these roles will be taken over by agents like yourself in a year or two? I agree that a program like this is probably appealing to corporations at the present, but it seems like poor career planning for anybody to invest their time trying to become such a consultant.