It can work, but generally you either hit performance issues, get less library support, or run into more operational pain. I mainly work on building library tooling on top of TensorFlow, and losing support for a good number of operations isn't something I could reasonably accept. Also, cloud AMD GPU support is near non-existent, and most of my company's compute relies on the cloud; T4 pricing on GCP is very nice.
AMD's ROCm does exist for ML, but it is very far behind CUDA in support and libraries. I don't think any moderate investment in AMD GPUs is likely to close that gap, so I'm pretty pessimistic about AMD being a good solution for ML anytime in the next several years.
I think a hobbyist who already owns an AMD card and wants to do basic ML is the only situation where I'd consider it. In any other situation I'd strongly advise against it.
Sorry, I think you misunderstood. This is installing both types of card in the same computer: one for desktop/Wayland (AMD), and the other for ML/CUDA (Nvidia).
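For the mixed-card setup described above, one common wrinkle is making sure the ML framework lands on the Nvidia card. CUDA only enumerates Nvidia devices, so the AMD display card is invisible to it anyway, but on multi-GPU boxes it can still be worth pinning the device explicitly. A minimal sketch, assuming the Nvidia card is CUDA device `0` on your machine:

```python
import os

# CUDA_VISIBLE_DEVICES must be set BEFORE importing TensorFlow or
# PyTorch; frameworks read it once at initialization. "0" is an
# assumption about which CUDA index the Nvidia card gets on this box
# (check with `nvidia-smi`).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf  # would now see only the pinned Nvidia GPU

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

The AMD card stays on the display side (Wayland picks it up via its own driver stack), so nothing here touches it; this only scopes what the CUDA runtime is allowed to see.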