
> that “extra” is going to be somewhat school and school system dependent

Is it? That’s news to me. I grew up thinking it was standard to just have 5 grade points for honors and AP classes. I guess the nuance comes into the bar for a class being “honors,” vs. AP which has a more consistent definition given the standardized exam?
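For anyone unfamiliar, the "5 grade points" convention works roughly like this (a sketch of a common scale; the actual weighting is school-dependent, which is the point of the thread):

```python
# Weighted GPA: honors/AP classes earn one extra grade point, so an A
# is worth 5.0 instead of 4.0. (Common convention; scales vary by school.)
POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def weighted_gpa(grades):
    # grades: list of (letter_grade, is_honors_or_ap)
    total = sum(POINTS[g] + (1.0 if honors else 0.0) for g, honors in grades)
    return total / len(grades)

print(weighted_gpa([("A", True), ("A", False)]))  # 4.5
```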


A significant, and growing, number of schools (many of them independent and high-end) are no longer offering AP/Honors classes, which means you no longer have a consistent measure for comparing GPAs.

To the original point, though, this is one of many reasons why GPA + test scores aren’t really a standard metric to be used on their own for merit-based admissions. They’re really just a bar, after which you have to take additional factors into account.


Interesting idea, but look at it from the applicant’s perspective: you’d have to front like $50,000 to apply to just 5 schools (if you call $10k the average price for a semester). Even if you solved the financial aid question here, admissions is a numbers game for students, usually, so getting accepted to more than one school would dig the student loan debt hole that much deeper across the board.
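The back-of-envelope math, using the figures above (the $10k average semester price is the commenter's own assumption):

```python
# Upfront cost if applying required fronting a semester's price per school.
# $10k per school is the assumed average from the comment above.
deposit_per_school = 10_000
schools = 5

total_upfront = deposit_per_school * schools
print(total_upfront)  # 50000
```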


> you’d have to front like $50,000 to apply to just 5 schools (if you call $10k the average price for a semester). Even if you solved the financial aid question here, admissions is a numbers game for students, usually, so getting accepted to more than one school would dig the student loan debt hole that much deeper across the board.

Perhaps insurance companies could create a product to insure applicants against the case that they get admitted to many colleges and thus have to pay the semester fee (or application fee) many times over.


Even getting admitted to two colleges would be financially crippling to a majority of applicants.

Insurance premiums would be a significant fraction of the average tuition, which would be beyond the reach of many.

The effect of the proposed system would be that most people would just apply to one school. If rejected they would try another next year, if they haven’t given up on college, and so on.


This would result in lots of schools moving to a rolling admission process with monthly (or even weekly) notification cycles. Students would then apply serially to several schools.

Of course, there's no way to get schools to all require a deposit, and even if there were schools would give fee-waivers to low-income students (giving them an advantage over middle-class kids).


They sorta do: each comment on a discussion starts a thread you can reply to, unlike on issues where you have to keep quoting each other to track a topic if there’s more than one. It still sucks, especially since long threads are collapsed and thus harder to ctrl-f or link a reply, but it’s something.


So does OpenAI (last I checked), which I sadly learned the hard way.


I lost the free credits they were giving away a couple years back, but when you pay, they make it fairly clear the credits will expire. I see no issue with any provider doing this, as long as it is made clear.


I have many issues with providers doing that. The reason they take your money and give you credits in exchange is because that gives them lower credit card processing fees (one larger transaction vs. many smaller ones). If they're going to do that to make things cheaper for them, then they should let me use those credits whenever I want, no matter how far into the future I want to use them.

If not, they should refund them. They absolutely should refund them if I close my account.

If I'm going to give you money, I expect something in return for every cent of it. That's just basic decency.


> The reason they take your money and give you credits in exchange is because that gives them lower credit card processing fees (one larger transaction vs. many smaller ones).

That is not the main reason why they do this. The main reason why they do this is to get easy access to what essentially amounts to free capital, effectively it is an interest free loan from you to them.


Even though they do make it clear, one should still have an issue with providers doing it. The reason is that various providers like Uber, Lyft, Namecheap, and some cloud vendors do not do it at all, so it's clearly avoidable. As such, it is a fairly unethical practice.


> Even though they do make it clear

> As such, it is theft, plain and simple

What?


These aren't contradictory. If I say I'm going to steal your phone and then I do, notifying you beforehand doesn't absolve me of criminal liability. One could argue that it's a contractual arrangement, but there's a well established doctrine of unconscionability in contract law. It's especially applicable when contracts are unilateral rather than being negotiated between peers.


Signal has a desktop app. Unless you mean phone number, in which case I get where you’re coming from, though I think they allow just usernames now.


Usernames are only for discoverability. You still need a phone number.


You need to install the phone app to be able to activate it. If you've been offline on the desktop app for too long, you need the phone app again to re-activate the desktop app. I've also noticed a lot of issues synchronizing messages between computers using the desktop app without having the app on a phone.

They allow usernames as an alternative to sharing your phone number with other people. You still need a phone number (and the phone app) to create and activate an account.

It's very phone-first.


Nothing to lose besides thousands more lives, of course.


No real security guarantees also means tens of thousands of lives lost when Russia gets back into gear and tries to take Ukraine a second time.


Second time? More like fourth time. First there was Crimea, then Donbas, then the current invasion.


You are correct, I should have said "again".


Why would Russia bother going through this again?


The same reason they did it the first time.


Because after bribing Trump with the ability to brag that he won the USA $500B of minerals, they will be able to march in without any US interference...


That doesn’t answer the “why?” at all. To what end?


Are you asking people to read Putin's mind? Or to speculate? It seems reasonable to believe that, at the very least, Putin wants the territory he attempted to take when he first invaded, Kyiv et al.

Why would that goal change if he hasn't achieved it?


Because both Ukraine and Russia have changed? Ukraine is war torn, deeply in debt, and no longer provides the strategic benefit to Russia it might’ve in ‘22. Russia’s economy and populace needs to recover from being war-oriented.

They have their land bridge to Crimea now, and if I had to speculate, they’d be happy with a neutered neighbor that can’t join NATO, essentially a populated DMZ. I can’t see the benefit in taking Ukraine on again after the dragged-out meat grinder it was this time around.


So speculation then. Here's some more: because it won't be a dragged out meat-grinder if he has a puppet US administration/political party.


> Ukraine is war torn, deeply in debt, and no longer provides the strategic benefit to Russia it might’ve in ‘22.

Expanded access to the Black Sea and natural gas/minerals were and still are very important to Russia. Aside from these, a total victory would allow Putin to cement himself as a conqueror in Russian history books.


The deal was a sham -- it came with no guarantees.


Don’t victim blame.


Ukraine has lost massive amounts of lives, territory, and foreign funding under Zelensky's leadership; he effectively has zero negotiating power.

Where are the voices that simply want the war to end so people stop dying? It’s easy to say bully this and ally that from the comfort of your office while hundreds of thousands of people die in a strip of land most “supporters” couldn’t point out on a map. At this point there’s a collective ego tied to the outcome more than there is any care for the actual people involved.


Some things are worth fighting for you quisling.


As long as someone else does the fighting, right? Last I checked, the majority of Ukrainians themselves want a quick end to the war.

[1]: https://news.gallup.com/poll/653495/half-ukrainians-quick-ne...


I want a million dollars. Very much!

That doesn't mean I'll accept your proposal to rob a bank.


What a dogshit poll. There were three options given:

1. Ukraine should continue fighting until it wins the war.

2. Ukraine should seek to negotiate an ending to the war as soon as possible.

3. Don't know/Refused.

"Ukraine should surrender unconditionally" and "Ukraine should negotiate permanent security guarantees" and "Ukraine should fight its way into a better negotiating position" are all in the same bucket. This is maliciously bad poll design.


Seems like a massive buried lede in an “outperforms the previous SoTA” paper.


How does generating images with 90% fewer pixels count as beating DALL•E?


There are plenty of models around that will reliably upscale an image. That's not the hard part.


Even the latest AI upscalers will make a 384x384 image look pretty terrible when put against, e.g., SDXL at 1024x1024 native. It's just too little to work with.
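For scale, a quick check of the raw pixel counts being compared (the "90% less" figure upthread is approximate):

```python
# Pixel counts for a 384x384 output vs. SDXL's 1024x1024 native resolution.
small = 384 * 384        # 147,456 pixels
large = 1024 * 1024      # 1,048,576 pixels

ratio = small / large
print(f"{ratio:.3f}")    # 0.141, i.e. roughly 86% fewer pixels
```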


I think they're referring to specific benchmarks


It’s not better than o1. And given that OpenAI is on the verge of releasing o3, has some “o4” in the pipeline, and Deepseek could only build this because of o1, I don’t think there’s as much competition as people seem to imply.

I’m excited to see models become open, but given the curve of progress we’ve seen, even being “a little” behind is a gap that grows exponentially every day.


When the price difference is so high and the performance so close, of course you have a major competition problem. To say nothing of the fact that this is fully open source.

Most importantly, this is a signal: OpenAI and Meta are trying to build a moat using massive hardware investments. DeepSeek took the opposite direction, and not only does it show that hardware is no moat, it basically makes a fool of their multi-billion-dollar claims. This is massive. If only investors had the brains it takes, we would pop this bubble already.


Why should the bubble pop when we just got the proof that these models can be much more efficient than we thought?

I mean, sure, no one is going to have a monopoly, and we're going to see a race to the bottom in prices, but on the other hand, the AI revolution is going to come much sooner than expected, and it's going to be in everyone's pocket this year. Isn't that a bullish signal for the economy?


Chances are the investors who put in all that capital would rather invest it in the team that has the ability to make the most of it. Deepseek calls into question whether OpenAI, Anthropic or Google are as world class as everyone thought a few days ago.


It doesn’t call it into question; they’re not. OpenAI has been bleeding researchers since the Anthropic split (arguably their best ones, given Claude vs. GPT-4o). And while Google should have all the data in the world to build the best models, they still seem organizationally incapable of leveraging it to their advantage, as was the case with their inventing Transformers in the first place.


> While Google should have all the data in the world to build the best models

They do have the best models. Two models made by Google share the first place on Chatbot Arena.

[1] https://lmarena.ai/?leaderboard


I'm not sure placing first in Chatbot Arena is proof of anything except being the best at Chatbot Arena; it's been shown that models that format things in a visually more pleasant way tend to win side-by-side comparisons.

In my experience doing actual work, not side by side comparisons, Claude wins outright as a daily work horse for any and all technical tasks. Chatbot Arena may say Gemini is "better", but my reality of solving actual coding problems says Claude is miles ahead.


I think this is the correct take. There might be a small bubble burst initially after a bunch of US stocks retrace due to uncertainty. But in the long run this should speed up the proliferation of productivity gains unlocked by AI.


I think we should not underestimate one aspect: at the moment, a lot of the hype is artificial (and despicable, if you ask me). Anthropic says AI can double human lifespan in 10 years' time; OpenAI says they have AGI around the corner; Meta keeps insisting their model is open source when in fact they only release the weights. They think (maybe they are right) that they would not be able to get these massive investments without hyping things a bit, but DeepSeek's performance should call for things to be reviewed.


Based on reports from a16z the US Government likely wants to bifurcate the top-tier tech and bring it into DARPA, with clear rules for how capable anything can be that the public will be able to access.

I consider it unlikely that the new administration is philosophically different with respect to its prioritization of "national security" concerns.


> Anthropic says AI can double human lifespan in 10 years time;

That's not a crazy thing to say, at all.

Lots of AI researchers think that ASI is less than 5 years away.

> deepseek's performance should call for things to be reviewed.

Their investments, maybe, their predictions of AGI? They should be reviewed to be more optimistic.


I am a professor of Neurobiology, I know a thing or two about lifespan research. To claim that human lifespan can be doubled is crazy per se. To claim it can be done in 10 years by a system that does not even exist is even sillier.


But it took the deepseek team a few weeks to replicate something at least close to o1.

If people can replicate 90% of your product in 6 weeks you have competition.


Not only a few weeks, but more importantly, it was cheap.

The moat for these big models was always expected to be capital expenditure for training, costing billions. It's why companies like OpenAI are spending massively on compute: it builds a bigger moat (or tries to, at least).

If it can be shown, as it seems to have been, that you can use smarts to make use of compute more efficiently and cheaply but achieve similar (or even better) results, the hardware moat buoyed by capital is no longer one.

I'm actually glad, though. An open-sourced version of these weights should ideally spur the type of innovation that Stable Diffusion did when theirs was released.


o1-preview was released Sep 12, 2024. So DeepSeek team probably had a couple of months.


> Deepseek could only build this because of o1, I don’t think there’s as much competition as people seem to imply

And this is based on what exactly? OpenAI hides the reasoning steps, so training a model on o1 is very likely much more expensive (and much less useful) than just training it directly on a cheaper model.


Because before o1, literally no one was doing CoT-style test-time scaling. It is a new paradigm. The talking point back then was that LLMs had hit a wall.

R1's biggest contribution, IMO, is R1-Zero; I'm fully sold that they don't need o1's output to be this good. But yeah, o1 is still the herald.


I don't think Chain of Thought in itself was a particularly big deal, honestly. It always seemed like the most obvious way to make AI "work". Just give it some time to think to itself, and then summarize and conclude based on its own responses.

Like, this idea always seemed completely obvious to me, and I figured the only reason why it hadn't been done yet is just because (at the time) models weren't good enough. (So it just caused them to get confused, and it didn't improve results.)

Presumably OpenAI was the first to claim this achievement because they had (at the time) the strongest model (+ enough compute). That doesn't mean CoT was a revolutionary idea, because IMO it really wasn't. (Again, it was just a matter of having a strong enough model, enough context, and enough compute for it to actually work. That's not an academic achievement, just a scaling victory.)
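As a concrete illustration of the "give it time to think" idea, CoT prompting can be as simple as asking for steps before the answer (a minimal sketch; the prompt wording and the `Answer:` marker are illustrative conventions of my own, not any lab's actual method):

```python
# Minimal chain-of-thought prompting sketch: ask the model to reason step
# by step, then extract the final answer from its completion.
def build_cot_prompt(question: str) -> str:
    return (
        f"Question: {question}\n"
        "Let's think step by step, then give the final answer "
        "on a line starting with 'Answer:'."
    )

def extract_answer(completion: str) -> str:
    # Take the text after the last 'Answer:' marker in the model's output.
    return completion.rsplit("Answer:", 1)[-1].strip()

print(extract_answer("Step 1: ... Step 2: ... Answer: 42"))  # 42
```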


But the idea that the more tokens you allocate to CoT, the better the model gets at solving the problem, is revolutionary. And a model self-correcting within its own CoT was first demonstrated by the o1 model.


Chain of Thought was known since 2022 (https://arxiv.org/abs/2201.11903), we just were stuck in a world where we were dumping more data and compute at the training instead of looking at other improvements.


CoT is a common technique, but the scaling law, that more test-time compute spent on CoT generation correlates with problem-solving performance, comes from o1.


> even being “a little” behind is a gap that grows exponentially every day

This theory has yet to be demonstrated. As yet, it seems open source just stays behind by about 6-10 months consistently.


> It’s not better than o1.

I thought that too before I used it to do real work.


Yes. It shines with real problems.

