
I've used both Colt and Apache Commons Math in a machine learning setting. It was mostly the better handling of sparse matrix ops that made Apache a couple of times faster than Colt (due to hash-based vs. sorted-list vectors). They were pretty much the same on dense linear algebra. Intel MKL, Eigen, and uBLAS can be better, but I haven't done the benchmarks to prove this either way.
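To illustrate the hash-based vs. sorted-list distinction above, here is a rough, self-contained sketch using only the JDK (no Colt or Commons Math; the `SparseDot` class and its data are made up for illustration). A hash-based sparse vector gets O(1) amortized lookups, while a sorted-list representation pays an O(log n) binary search per lookup:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class SparseDot {
    // Dot product of a hash-based sparse vector (index -> value)
    // against a sorted-list sparse vector (parallel index/value arrays).
    static double dot(Map<Integer, Double> a, int[] bIdx, double[] bVal) {
        double sum = 0.0;
        for (Map.Entry<Integer, Double> e : a.entrySet()) {
            // Sorted-list side: O(log n) binary search per lookup.
            // A hash-based side would be O(1) amortized instead.
            int pos = Arrays.binarySearch(bIdx, e.getKey());
            if (pos >= 0) sum += e.getValue() * bVal[pos];
        }
        return sum;
    }

    public static void main(String[] args) {
        // Hash-based sparse vector: only nonzero entries stored.
        Map<Integer, Double> a = new HashMap<>();
        a.put(3, 2.0);
        a.put(1000, 4.0);

        // Sorted-list sparse vector: indices kept in ascending order.
        int[] bIdx = {3, 500, 1000};
        double[] bVal = {1.5, 9.0, 0.5};

        // Overlapping indices are 3 and 1000: 2.0*1.5 + 4.0*0.5
        System.out.println(dot(a, bIdx, bVal)); // prints 5.0
    }
}
```

Which layout wins depends on the workload: hash maps are faster for random access and scattered updates, while sorted lists allow cheap ordered traversal and merging.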


While we're on the subject, I'm curious if you've played with matrix-toolkits-java / netlib-java. I'm considering switching to it. I've had good experiences with the Lawson-Hanson non-negative least squares that comes with netlib.


The netlib-java project is very convenient because it allows you to use optimized native netlib libraries, and transparently falls back to the f2j (JVM bytecode) library when native libs are unavailable.

The Java version of netlib produced by f2j is unfortunately the unoptimized "reference implementation," so it's not terribly performant for large matrices.



