Hacker News | simonask's comments

I've also never had any trouble with NVIDIA on the desktop. I think most issues people have are on laptops, which have odd hybrid/dual GPU setups, and which exercise suspend/hibernate much more aggressively.

That's a good point that I hadn't considered. I've never had a laptop with Nvidia; I probably subconsciously avoided those dual-GPU setups, as they sounded hacky and I never really needed fast 3D on a laptop.

FWIW I have an Asus Zephyrus G14 and the dual graphics cards work pretty well in Linux in hybrid mode. It's pretty cool: certain things (games) run on the dedicated Nvidia GPU, and everything else runs on the built-in AMD GPU.

I'm guessing it's because the laptops are popular enough that there's a dedicated group of people that make it work [0].

I'm still on X11, dunno what the story is like with Wayland though.

[0] https://asus-linux.org/


As far as I know, dual-graphics laptops are a pain no matter the OS and chips.

The one sample I know of first-hand is an AMD/Nvidia laptop that never obeys the settings about which GPU to use. In Windows.


It’s not really about OS differences - as the GP said, games don’t typically use a lot of OS features.

What they do tend to really put a strain on is GPU drivers. Many games and engines have workarounds and optimizations for specific vendors, and even driver versions.

If the GPU driver on Linux differs in behavior from the Windows version (and it is very, very difficult to port a driver in a way that doesn’t), those workarounds can become sources of bugs.


Newtypes aren’t hacks, they’re perfectly acceptable in my opinion. Especially if you’re willing to also use a crate like `derive_more`.

In my experience using newtypes like this causes a constant shuffle between the original type and the newtype.

If a library exposes Foo and I wrap it in MyFoo implementing some trait, I need to convert to MyFoo everywhere the trait is needed and back to Foo everywhere the original type is expected.

In practice this means cluttering the code with as_foo and as_myfoo all over the place.

You could also impl From or Deref for one direction of the conversion, but it makes the code less clear in my opinion.
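For illustration, a minimal sketch of that shuffle (`Foo` and `MyFoo` are made-up names; `Foo` stands in for the library's type, with `From` impls for the two conversion directions):

```rust
// Stand-in for a type exposed by an external crate.
pub struct Foo(pub u32);

// Local newtype so we can implement our own traits on it.
pub struct MyFoo(pub Foo);

// One `From` impl per direction removes some of the `as_foo`/`as_myfoo`
// noise, but every call across the library boundary still converts.
impl From<Foo> for MyFoo {
    fn from(f: Foo) -> Self {
        MyFoo(f)
    }
}

impl From<MyFoo> for Foo {
    fn from(m: MyFoo) -> Self {
        m.0
    }
}

// Stand-in for a function from the library that expects `Foo`.
fn library_function(f: &Foo) -> u32 {
    f.0
}

fn main() {
    let mine: MyFoo = Foo(7).into();
    // Back to `Foo` again at the library boundary:
    let n = library_function(&Foo::from(mine));
    assert_eq!(n, 7);
    println!("ok");
}
```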


One strategy I like is to declare “view” types for serialization and deserialization, because you’re going to be doing that anyway if your serialized format is meant to be compatible across versions.

Serde also comes with a bunch of attributes and features to make it easy to short-circuit this stuff ad hoc.

I know this only solves the serialization use case, but that seems to be where most people run into this.
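A sketch of the view-type pattern (all names invented; the serde derives are omitted so the example stays dependency-free, but in real code the view type is the one that would carry `#[derive(Serialize, Deserialize)]`):

```rust
// The domain type: free to change shape between versions.
struct User {
    name: String,
    signup_epoch_secs: u64,
}

// The "view" type: mirrors the frozen wire format of version 1, and
// is the only type that would actually derive Serialize/Deserialize.
struct UserV1 {
    name: String,
    signup: u64,
}

impl From<&User> for UserV1 {
    fn from(u: &User) -> Self {
        UserV1 { name: u.name.clone(), signup: u.signup_epoch_secs }
    }
}

impl From<UserV1> for User {
    fn from(v: UserV1) -> Self {
        User { name: v.name, signup_epoch_secs: v.signup }
    }
}

fn main() {
    let u = User { name: "ada".into(), signup_epoch_secs: 42 };
    // Serialization would go through the view; here we just round-trip.
    let round_trip = User::from(UserV1::from(&u));
    assert_eq!(round_trip.signup_epoch_secs, 42);
    println!("ok");
}
```

The domain type can then be refactored freely; only the `From` impls need updating to keep the wire format stable.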


Honestly, in my experience it rarely matters (if you care about stable APIs), as most types you want to have at an API boundary are written (or auto-generated) by you.

This leaves a few, often small, types like `DateTime<Utc>`, which you can handle with serde's serialization-function override attributes or automatic conversions, without even needing newtypes (though some of these attributes could be better designed).

serde is not perfect but pretty decent. IMHO the proc macros it provides need some love/a v2 rewrite, which would only affect the generated impl code and as such is fully backward compatible, can be mixed with old code, and can come from a different author (i.e. it doesn't have the orphan problem).

Anyway, that doesn't make the problem go away; serialization/serde is just both the best and worst example. (Best because it's extremely widespread and "good enough" but not perfect, which is poison for ecosystem evolution; worst because serialization is enough of a special case that its best solution is potentially unusable for the generic problem, e.g. reflection.)


Other than duck-typed languages (and I count Go as basically that), which languages actually provide this feature?

AFAIK, it’s not really very common to be able to extend foreign types with new interfaces, especially not if you own neither.

C++ can technically do it using partial specialization, but it’s not exactly nice, and results in UB via ODR violation when it goes wrong (say you have two implementations of a `std::hash` specialization, etc.). And it only works for interfaces that are specifically designed to be specialized this way - not for vanilla dynamic dispatch, say.


> Other than duck-typed languages (and I count Go as basically that), which languages actually provide this feature?

There are only like 3 significant languages with trait-based generics, and both of the others have some way of providing orphan instances (Haskell by requiring a flag, Scala by not having a coherence requirement at all and relying on you to get it right, which turns out to work pretty well in practice).

More generally it's an extremely common problem to have in a mature language; if you don't have a proper fix for it then you tend to end up with awful hacks instead. Consider e.g. https://www.joda.org/joda-time-hibernate/ and https://github.com/FasterXML/jackson-datatype-joda , and note how they have to be essentially first party modules, and they have to use reflection-based runtime registries with all the associated problems. And I think that these issues significantly increased the pressure to import joda-time into the JVM system library, which ultimately came with significant downsides and costs, and in a "systems" language that aims to have a lean runtime this would be even worse.


Sure, the `chrono` library in Rust had essentially the same problem.

Scala is interesting. How do they resolve conflicts?


> Scala is interesting. How do they resolve conflicts?

If there are multiple possible instances you get a compilation error and have to specify one explicitly (which is always an option). So you do have the problem of upgrading a dependency and getting a compilation error for something that was previously fine, but it's not a big deal in practice - what I generally do is go back to the previous version and explicitly pass the instance that I was using, which is just an IDE key-combo, and then the upgrade will succeed. (After all, it's always possible to get a conflict because a library you use added a new method and the name conflicted with another library you were using - the way I see it this is essentially the same thing, just with the name being anonymous and the type being the part that matters)

You also theoretically have the much bigger problem of using two different hashing/sorting/etc. implementations with the same datastructure, which would be disastrous (although not an immediate memory corruption issue the way it could be in Rust). But in practice it's just not something I see happening, it would take a very contrived set of circumstances to encounter it.


Interesting, thanks for explaining.

> (although not an immediate memory corruption issue the way it could be in Rust)

Just to note, all of Rust's standard container types are designed such that they guarantee that buggy implementations of traits like `Hash` and `Ord` do not result in UB - just broken collections. :-)
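A small illustration of that guarantee (hypothetical `BadKey` type; its `Ord` deliberately violates the total-order contract by claiming every key is less than every other):

```rust
use std::cmp::Ordering;
use std::collections::BTreeMap;

#[derive(PartialEq, Eq)]
struct BadKey(u32);

impl PartialOrd for BadKey {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for BadKey {
    fn cmp(&self, _other: &Self) -> Ordering {
        Ordering::Less // the bug: nothing ever compares Equal
    }
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(BadKey(1), "first");
    // The second insert never sees an Equal comparison, so it does
    // not replace the first entry.
    map.insert(BadKey(1), "second");
    // The map is logically broken -- one key, two entries, and
    // lookups can't find anything -- but it's safe: no UB, just
    // wrong answers.
    assert_eq!(map.len(), 2);
    assert!(map.get(&BadKey(1)).is_none());
    println!("ok");
}
```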


C# isn't a duck-typed language (well, you can do that via the dynamic keyword, but I don't know who would typically do that).

Most integration libraries on NuGet (C#'s equivalent of cargo) are A+B-style adapter libraries.

E.g.:
DI container: Autofac
Messaging library: MediatR
Integration: MediatR.Extensions.Autofac.DependencyInjection

There are many examples of popular libraries like this in that world.


C# does not support adding interfaces to foreign types. It does support extension classes to add methods and properties to a type, but nothing that adds fields or changes the list of interfaces implemented by a type. Rust supports this as well, because you can use traits this way.

Dependency injection is a popular solution for this problem, and you can do that as well in Rust. It requires (again) that the API is designed for dependency injection, and instead of interfaces and is-a relationships, you now have "factories" producing the implementation.


I mean… Sure, if we’re just making stuff up, a compiler that can magically understand whatever you were trying to do and then do that instead of what you wrote, I guess that’s a nice fantasy?

But out here on this miserable old Earth I happen to think that Rust’s errors are pretty great. They’re usually catching things I didn’t actually intend to do, rather than preventing me from doing those things.


> But out here on this miserable old Earth I happen to think that Rust’s errors are pretty great. They’re usually catching things I didn’t actually intend to do, rather than preventing me from doing those things.

As it happens, you are replying to the person who made Rust's errors great! (it wasn't just them of course, but they did a lot of it)


I bow to them and thank them for their service!

I think there are legitimate criticisms of Rust that fall in this category, but the orphan rule ain’t it.

In most other languages, it is simply not possible to “add” an interface to a class you don’t own. Rust lets you do that if you own either the type or the trait. That’s strictly more permissive than the competition.

The reasons those other languages have for not letting you add your interface to foreign types, or extend them with new members, are exactly the same reasons that Rust has the orphan rule.
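Concretely (the trait and names below are invented for illustration; `Vec<u8>` stands in for a foreign type):

```rust
use std::fmt;

// A trait we own: we may implement it for any type, including
// foreign ones.
trait Describe {
    fn describe(&self) -> String;
}

// Allowed: local trait, foreign type (`Vec<u8>` comes from std).
impl Describe for Vec<u8> {
    fn describe(&self) -> String {
        format!("{} bytes", self.len())
    }
}

// NOT allowed (orphan rule): foreign trait for a foreign type.
// Uncommenting this would fail with error E0117:
// impl fmt::Display for Vec<u8> { /* ... */ }

// Allowed again: foreign trait (`Display`) for a type we own.
struct Meters(f64);

impl fmt::Display for Meters {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} m", self.0)
    }
}

fn main() {
    assert_eq!(vec![1u8, 2, 3].describe(), "3 bytes");
    assert_eq!(Meters(1.5).to_string(), "1.5 m");
    println!("ok");
}
```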


AI has exactly the biases of its training data.

This take is bafflingly unscientific. You are spewing pure unadulterated ideology here - a particularly ugly one too.

Yes, behavioral genetics is the climate science of the left. If there are PhDs and university departments studying it, I'm not gonna be someone who sticks their head in the sand for the sake of their flavor of political identity.

They are studying it, while you are drawing your own conclusions from a cursory understanding.

Your claim that “wealthy people are more intelligent” is so incredibly problematic on so many levels, starting with the fundamental methodological problem that we do not have any reliable way to actually measure intelligence. Add to this the extremely obvious fact that some rich people have no other qualifications than being born with a trust fund, and some poor people face extreme obstacles to reaching their potential.

This world view is total, utter dogshit, completely removed from reality.


> Your claim that “wealthy people are more intelligent” is so incredibly problematic on so many levels

If you want people to listen to you, then don't advertise your biases like this. Call it "wrong" instead of "problematic"; calling it "problematic" shows you don't want to see that result, not just that you think it's wrong.


No, I used it as a euphemism for “fascist”, which is a somewhat stronger descriptor than simply “wrong”.

The notion that the deeply oppressive status quo is somehow fair and just is one of the worst post-hoc rationalizations that purportedly smart people can fall into. Open your eyes.


What studies support your critiques?

The amount of effort I’m willing to dedicate to counter wild Social Darwinism is very low. Please just do the bare minimum and go to Wikipedia.

Because we know better.

There's a word for that. Hubris.

Also helps to not be a teenager.

He was fat in his 20s and then got fit; he wasn't a teenager.
