Hacker News | computably's comments

The census data you linked lists unemployment and underemployment for graduates aged 22-27. Assuming nontraditional graduates are a relatively small minority, that's a 5-year window after graduation.

I would find it believable, though not interesting, for only 11% of CS grads to have a local-median-pay, CS-related job locked in at graduation.


> And the similarities are striking. Now, I don't know whether the recommended novel is in the training data, or it's actually written by an LLM. Or maybe it's just how the novelist writes.

For traditionally published works, it's trivial to exclude LLM-written content: just look for anything published before Nov 30, 2022.
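
Mechanically, that cutoff is a one-line filter. A minimal sketch (the record structure and titles are made up for illustration; only the Nov 30, 2022 cutoff comes from the comment above):

```python
from datetime import date

# ChatGPT's public launch, the cutoff suggested above.
CHATGPT_LAUNCH = date(2022, 11, 30)

# Hypothetical catalogue records for illustration only.
books = [
    {"title": "Pre-LLM Novel", "published": date(2019, 5, 1)},
    {"title": "Post-LLM Novel", "published": date(2023, 2, 14)},
]

# Keep only works that predate the launch.
pre_llm = [b for b in books if b["published"] < CHATGPT_LAUNCH]
```

The same predicate works as a date-range restriction in any search interface that exposes one.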


Which is also a good filter for web searches to exclude a lot of garbage results (when the specific search makes sense for non-recent results).

Except many search engines have a recency bias.

Previously a sane default, since news and the status quo keep changing, but it makes you even more likely to encounter slop now.


Not sure how that changes the fact that you can filter by date range in searches where you don't actually need anything recent?

I think we are discussing the wrong problem here. I have no solution to offer, but I think the problem is not so much generated content, but the surroundings in which it can thrive and become the content you see everywhere.

If we hadn't removed the gatekeepers everywhere (and I know there are problems with them, too), then all that technology would not be able to do much harm.

It might also have to do with incentives. The incentives in our economy are not to help and advance society, the invisible hand notwithstanding.


Why stop with traditionally published works? Before dead-internet-day, very-nearly all forms of writing were guaranteed to be hand crafted, organic, and made with 100% Natural Intelligence.

The artificial stuff often has an odd taste, but boy it sure is quick and convenient.


Don't you remember the endless SEO spam that swamped the Net even before GPT, allegedly written by real humans?

You joke, but I bet every person in this forum, presented with the choice between a bot-filled forum and a guaranteed human-only* forum, would go with the latter.

* this is a hypothetical scenario. I don't know any guaranteed human-only digital forums.


I converse with LLMs enough for research at this point that I feel I have a good structure for hopping between them and primary sources, so I don't get annoyed with them too easily.

Whereas I haven't seriously reflected on my social media consumption habits for over 15 years, and over the years I'm getting more and more annoyed at social media.

Not to be misanthropic, but there's something seriously wrong with my social media usage, especially since I know there's a real human on the other side: ever-increasing annoyance towards commenters, and just the feelings I get after reading social media.

It may sound dopamine / self-help related, but actually I think all of that is part of the issue (I discovered that in high school, when social media was taking off). Something about the way I fundamentally interact with the medium feels more horrible and icky the more I mature.


I agree with you, but as to your addendum:

Niche hobbyist forums are still safe, for now. There's just not enough commercial interest in petroleum lantern restoration to make it worth anyone's time to poison this particular well.

Even some larger niche hobbies like the saltwater aquarium community seem pretty safe for now (though it also helps that many forums have members who visit each other to trade corals and admire each other's tanks).


On the contrary! The dead-day theorem established earlier states that an 11/22 date filter is a necessary condition for verifiable human-only content, when filtered by content-creation date.

A weaker theorem can be postulated that any such filter provides a second order sufficient condition.

This means we can filter content by account creation date, for example, by hiding all posts and comments from accounts created after the digital death event. This won’t always guarantee human-only content but certainly more than otherwise.
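
As a rough sketch of that account-age filter (the field names and the exact cutoff date are assumptions, not any real forum's API):

```python
from datetime import date

# Assumed "digital death event" cutoff, per the thread above.
DEAD_INTERNET_DAY = date(2022, 11, 30)

def probably_human(comment: dict) -> bool:
    """Heuristic only: an old account is no guarantee of a human author."""
    return comment["account_created"] < DEAD_INTERNET_DAY

# Hypothetical comments for illustration.
comments = [
    {"author": "old_hand", "account_created": date(2015, 3, 2)},
    {"author": "fresh_acct", "account_created": date(2024, 1, 9)},
]

visible = [c for c in comments if probably_human(c)]
```

As the comment says, this is a sufficient-ish condition at best: old accounts can still post generated text.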

But then we wouldn’t be having this most definitively human-to-human conversation, right?


Is the ChatGPT launch the "low background steel" date for writing?

What are the dates for images and video? Nano Banana Pro and Seedance 2.0?

And code? Opus 4.6?


It's not the launch of GPT, but probably about 4 or 4o that it really became solid. I also don't think video is there just yet, at least for video over 10 seconds.

Is it "solid" if people can read it and instantly know it's generated content?

No. But you can easily make and post content that is not easily detectable as generated.

You only notice plastic surgery when it's bad, but that doesn't mean all plastic surgery looks bad...


Who's "people"? The bottom X% (40%?) of the population is already falling for AI slop video scams, but before that, they were also falling for pig butchering and Nigerian prince scams, so the "average" person benchmark has already been passed for text, photos, videos, etc. For more astute consumers, video isn't there yet.

There's also the question of whether people are even trying to disguise AI content, and how effective that disguise is. Are you or I missing the AI-generated text that just has a veneer of disguise on it?


>Who's "people"?

If you follow this thread up you will see the context is 'people who want to read content written by humans.'


Why does it matter when it "became solid"? There was plenty of slop generated with ChatGPT; that really was the turning point (because of public access).

I'm pretty sure their point is Dropbox is better as personal storage than file sharing between users.

> A lot of people read things, it changes their life, and their life is better. They may not even remember where they read these things. They don't produce citations all of the time. That's totally fine, and normal. I don't see LLMs as being any different. If I write an article about making code better, and ChatGPT trains on it, and someone, somewhere, needs help, and ChatGPT helps them? Win, as far as I'm concerned. Even if I never know that it's happened. I already do not hear from every single person who reads my writing.

Not a contradiction but an addendum: plenty of creative pursuits are not about functional value, or at least not primarily. If somebody writes a seemingly genuine blog post about their family trauma, and I as the reader find out it's made-up bullshit, that's abhorrent to me, whether or not AI is involved. And I think it would be perfectly fair for writers who do create similar but genuine content to find it abhorrent that they must compete with genAI, that genAI will slurp up their words, and that genAI's mere existence casts doubt on their own authenticity. It's not about money or social utility, it's about human connection.


The consent question gets weirder when agents have persistent memory. I run agents that accumulate context over weeks — beliefs extracted from observations, relationships with other agents. At what point does an agent's memory become its own work product vs. derivative of its training? There's no legal framework for that.
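
For concreteness, a minimal sketch of what such persistent memory might look like (names and structure here are entirely hypothetical; in a real system the belief extraction would be done by the model itself):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentMemory:
    # Beliefs accumulated over time, keyed by topic.
    beliefs: dict = field(default_factory=dict)
    # Raw observation log as (timestamp, text) pairs.
    observations: list = field(default_factory=list)

    def observe(self, text: str) -> None:
        self.observations.append((datetime.now(timezone.utc), text))

    def extract_belief(self, topic: str, belief: str) -> None:
        # Stand-in for model-driven extraction from the observation log.
        self.beliefs[topic] = belief

memory = AgentMemory()
memory.observe("agent B replied helpfully twice this week")
memory.extract_belief("agent B", "generally cooperative")
```

The legal question is whether `memory.beliefs`, accumulated weeks after training, is a new work product or still derivative of the weights that produced it.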

The only thing the doomers have been right about so far is that there's always a user willing to use --dangerously-skip-permissions. But that prediction's far from unique to doomers.

And there's always a product provider who's willing to add that flag, despite all the warnings.

> Without going into the specifics of car seats, I do think we overemphasize safety. The article mentions saving 57 children. How much are 57 lives worth? The answer is not infinite - a life has a numeric value, ask any insurance company.

Sure, the value of 57 lives isn't infinite, but this particular comparison is a totally absurd one to make. Births and deaths are completely morally independent; it's not as if those 57 lives could be substituted using the surplus of births.

> Every safety regulation ought to pass a cold-blooded cost/benefit analysis. Few of them do.

Actually I'm pretty sure that is in fact how safety regulations work.

Nonetheless, the concept of a "cold-blooded cost/benefit analysis" is paradoxical. Values are intrinsically subjective, hence we have democracy.


>Actually I'm pretty sure that is in fact how safety regulations work.

Of course the numbers "check out". Industry regulations are typically ghostwritten by some combination of industry groups, lobbying groups, and academia. Who funds those? The industry being regulated, or an industry that stands to benefit if some other industry is regulated.

80-100 years ago, if you were inclined to screech about fire safety, you'd have been citing numbers funded by the... wait for it... asbestos industry.

>hence we have democracy.

Democracy is a system for ensuring stable-ish power transfers by giving the people some semblance of control over the process and little more.


> We search for outliers yet arbitrarily limit the range of players available.

> Gender segregation, weight classes, these are antithetical to the underlying aim of competitive sports.

That's a naive, reductive view. Competition isn't just about benchmarking and finding the global #1, nor about perfect objective ranking. If it were, we would not bother with geographically based competitions, nor tournament brackets and championships.

Competition is an entertainment product and a major form of community. It sustains itself through competitors and spectators. Seeking objectivity is backwards.


Agreed, and I think people adopt this reductive view because it can be quite difficult to reason about objectively. In terms of a framework to channel one's thinking on this, I found this paper useful in understanding the rationale behind defining distinct categories of competitors in sports: https://www.researchgate.net/profile/Jim-Parry/publication/3...

The key takeaway in my view is that the authors make a distinction between "category advantage", which is a systematic, structural, group-based difference that exists before competition even begins, and "competition advantage", which we see play out in competitive events and is based on a mix of factors including skill, preparation, and both innate and trained talent.

Where exactly to draw the line can be somewhat subjective (e.g. in weight classes) but it helps to explain why we have a separate female category: male physiology confers such a significant category advantage that, in open competition, it would limit the ability of female athletes to compete meaningfully and demonstrate their abilities. Having a separate category fulfils this desirable outcome of showcasing and celebrating female athletic excellence.

Often we see calls to add various classes of males, particularly ones who have chosen to identify as women, framed as "inclusion" but from the perspective of who this category is actually intended for it's the opposite. Drawing a clear eligibility boundary around the female category maximises inclusion of female athletes who would otherwise be disadvantaged and excluded.


> Yes, if you want skilled labour. But that's not at all what ARC-AGI attempts to test for: it's testing for general intelligence as possessed by anyone without a mental incapacity.

Humans without a clinically recognized mental disability are generally capable of some kind of skilled labor. The "general" part of intelligence is independent of, but sufficient for, any such special application.


What is possible today is one thing. Sure people debate the details, but at this point it's pretty uncontroversial that AI tooling is beneficial in certain use cases.

Whether or not selling access to massive frontier models is a viable business model, or trillion-dollar valuations for AI companies can be justified... These questions are of a completely different scale, with near-term implications for the global economy.


Fedora also offers immutable distros which are (I've heard) much more user-friendly than Nix. Sure you can make a hacky pseudo-immutable workflow on a mutable distro but that's literally more effort for a worse result.

