Hacker News

When I discovered a vulnerability in Mac OS X that would allow an unprivileged user to keylog every user on the system (CVE-2007-0724), I let Apple know, then kept quiet until they fixed the issue. It took them 11 and a half months to fix. They thanked me in the security update note, and I now have a CVE on my resume. Was silence the most morally correct action? To this day, I am still unsure.


I've never thought to put CVE-IDs I'm credited for reporting on my resume. Is that...a thing? Do tech employers (outside of security consultancies) even know what a CVE-ID is?


What else should a person put on their resume (beyond job experience) when applying for security roles? Patents? Education? Open Source Projects? I would think that CVE-IDs would certainly lend color, and probably credibility to the resume of someone applying for a security position, particularly if the CVE-ID (which has some amount of peer review) was associated with something interesting or relevant to the position being applied for.


It helped land me a firmware development position at what was then a Fortune 500 company.


I'm at Basho (we make the Riak database) in a non-security role, and I pay attention to CVE-IDs. They stick with me and make a resume more memorable, which improves somebody's chances of advancing to an interview and gives me something to ask about during the interview.


If your resume has them, the others don't, and the boss is knowledgeable or curious ... then it could help you stand out. I would think it would look very good for a developer position, especially at a place that makes high reliability and/or network facing products.


Unless there were an easy workaround which you could only disclose by disclosing the rest of the problem - yes, it was.


This assumes something that I don't believe is defensible: that bad people wanting to install keyloggers on these systems did not already have knowledge of this vulnerability (or, even simpler, that one would seriously believe they would be unable to find this vulnerability without splicer having told them about it, as if he somehow had unique knowledge of the system). Just because I don't have a way to protect myself from harm does not imply that I am somehow better off not knowing that people can harm me.


It does not matter - 99.99% of users have no means or expertise to detect or disable a keylogger if somebody installed one. Unless there is a tool that would allow them to do it, disclosing the vulnerability is useless to them. On the other hand, if such a tool exists, it can probably be published without full disclosure. You are not better off not knowing - you are the same (unless you are in the highly qualified 0.01%) - but various criminals, who do not have the time or expertise to find a vulnerability themselves but can exploit known ones, immediately gain an edge over you once it is published. For example, I myself, without knowing of existing vulnerabilities, probably could not find one in any reasonable time, but given a good disclosure of one, I probably could, using existing tools and with some luck, produce a working exploit for many existing types of holes.

So the problem is that irresponsible disclosure does not help the victims at all, and does help criminals. The only positive case for irresponsible disclosure is when the vendor is unreasonably slow to issue patches and exploits are already known to be in the wild; then the added harm is minimal, and disclosure can raise the priority of the fix. But absent that knowledge, responsible disclosure is almost always better for users.



