angiolillo's comments

> Adding filters so that developers only look at actionable tickets would be much more sane.

That's a reasonable approach, but I don't understand how it's any more or less sane than autoclosing them with a stale label.

Whether these sorts of bugs are "open but stale" or "closed because stale" seems like it depends on whether the project defines "closed" as "no work planned" or "fixed", which both seem valid.

Either way these bugs will be hidden from developer dashboards but still available in the database, so there's no practical difference; you just need to make sure everyone is on the same page about the meaning of "closed".


To some people "open" means "not fixed" whereas to others it means "more work planned". I've worked on projects with both interpretations and it's fine as long as everyone is on the same page.

> It costs nothing (except pride?) to leave "Issues (1)" if there is indeed an Issue.

In our case we omit bugs we couldn't reproduce from the issues list due to practicality, not pride -- our software has tens of thousands of unreproducible bugs and having them show up in reports would drown out planned work.

And it's not like anyone deleted or locked the unreproducible bugs, they are either tracked as "open but unreproducible" or "closed because unreproducible". Either way they're still in the database in case more information comes along, but still filtered out of the vast majority of dashboards.
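Either convention can be expressed as a saved search. For example, using GitHub-style issue search syntax (the "unreproducible" label here is hypothetical; the exact label and syntax depend on the project's tracker):

    is:open -label:unreproducible    (hides them from dashboards if kept open but labeled)
    is:closed label:unreproducible   (finds them again if more information comes along)

The same idea works with Jira filters or any tracker that supports label-based queries.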


> What's the harm in keeping the bug open?

Conversely, what's the harm in closing the bug? (As long as you don't lock or delete it, I agree that's bad.)

People focused on the work often interpret "open" to mean "requires work" and "closed" to mean "no planned work", in which case keeping an unreproducible bug open is dishonest because it falsely implies that someone might continue to work on it.

Whereas people focused on the problem often interpret "open" to mean "not fixed" and "closed" to mean "fixed", in which case closing an unreproducible bug is dishonest because it falsely implies that it's no longer a problem.

Neither seems right or wrong as long as everyone on the project agrees which interpretation you're using.


> If you want to do it, do it. If you don't, then don't.

Three of the "four ways to lose" described in the article are significant harms inflicted on parties besides the bettors themselves. One cannot avoid these harms by not directly gambling.


> Normally I cringe at doomsday preppers

The doomsday preppers with a scarcity mindset and a bunker full of tin cans and military surplus make for good TV, but plenty of "preppers" don't look like that.

They also have a well-stocked pantry but focus more on strengthening the community to absorb shocks. Things like mutual aid networks, skill sharing, tool libraries, noodling with GMRS/HAM/LoRa comms, going on camping trips, helping each other out with kitchen gardens, and general community resilience. This approach doesn't cover every disaster scenario but it seems like a more pleasant (and realistic) option for the ones it does cover. And if nothing truly bad happens then at least they got to spend time doing things like gardening with their neighbors.

Being able to have offline Wikipedia, maps, and educational tools would be useful in either case but potentially even more so as a community resource because there are only so many skills each individual can learn.


I think another difference is that the Cambrian explosion of web apps vying for user attention meant that many web users had experience with both poorly designed and well-designed web apps and could gravitate towards the latter.

Whereas many Notes applications were internal so there was no "survival of the fittest" and the UI toolkit was passable at best. As a result, many Notes users never experienced a well-designed Notes app.


DARPA projects from more than a decade ago (VSAM/WAMI for aerial platforms like Gorgon Stare) used aerial imagery to capture ground shadows for gait tracking purposes.

From chatting with some of the researchers many years ago, my understanding is that it usually wasn't accurate enough for unique identification, and the gait shadow depended on shoe type and clothing, so a persistent gait-shadow database wouldn't have been useful. But it could be correlated with ground-based surveillance for identification: for example, if persons A and B were identified on a ground-based security camera entering a building, gait tracking could then be used to monitor where they went after leaving, even if they avoided ground-based cameras from that point on.


> I think most painters are happy that they don't have to go out and grind up snails to make their own purple pigment

People who loved mixing colors enough to become experts may have been disappointed when their hard-won skills were rendered obsolete by the march of progress.

There are some aspects of my work that are enjoyable on their own and others that I only do because they're necessary overhead to achieve a desired result. I appreciate technology that eliminates the latter but lament technology that eliminates the former.

So when AI obsoletes yet another human skill I suspect a lot of the wildly different emotional responses are dependent on whether someone considers the skill being obsoleted more "enjoyable" or "necessary overhead".


> What is an "excuse" for a layoff, exactly?

By "excuses for layoffs" I suspect what they meant was that there was a pre-existing desire to reduce headcount, and RTO was used in the expectation that some percentage of employees would quit voluntarily, so the company could avoid the relatively more costly process of laying them off.

Of course the downside of this approach is that the company has less control over which employees leave, which may result in them losing the employees who have the best alternatives.


Gotcha. There was definitely some over-hiring during covid, so some of this was a return to normal, I think.

Plenty of companies don't "need to exist". A company exists because someone decided to start it (usually to make some money) and lasts until someone decides to end it (usually when it stops making money).

If you're asking why Palantir (and Salesforce, Jira, etc) continue to make money despite not having any novel or complex technologies, my experience has been that these are not prerequisites for solving the vast majority of business problems. Usually network effects, customer relationships, brand identity, user interface, inertia, etc are all more important than the technology.

It is not always easy for a technologist to admit, but companies whose ongoing success is primarily due to some sort of (non-UX) technological superiority are the exception rather than the rule.


This discounts the value of user experience, which people will pay a premium for.

A good design is valuable, and this applies to business processes as well.

How would you design the user experience of constructing a submarine?

Good design IS technological superiority.


> This discounts the value of user experience, which people will pay a premium for.

The people making purchasing decisions at this level aren't the ones using it and don't care one whit about UX.

That isn't to say that it isn't valuable, but it's basically a non-factor. The technology itself is a non-factor. Everything is about connections, buzzwords and pretty slide decks.


They literally do, since the people making purchasing decisions usually came up through a system they used and know its intricacies, including all the pain points.

Randos don't become general managers.


People who actually care about the day-to-day pain points of Jira also do not become general managers.

As someone who used to teach UX grad courses, I'm happy you feel that way!

But I'm unsure why my response, which pointed out that a product's user interface is typically a more important success factor than its underlying technology, struck you as discounting the value of user experience.

> Good design IS technological superiority.

Hmm, I was attempting to respond to someone who wrote "It feels like a big pile of nothing... Big fat database schemas with big fat CRUD atop and layers of snazzy sparklines" which seemed to dramatically undervalue good schemas, CRUD implementations, or sparklines as "nothing". So to contrast those I used "technical superiority" as a catchall for the sort of challenging technical implementations that some developers lionize. Does that make sense? Is there a different term you'd suggest for that? For now I've changed to "(non-UX) technological superiority".


> This discounts the value of user experience, which people will pay a premium for.

Have you ever used Jira? They are very much not selling that thing on the basis of UX.

