From a product-uptake perspective, I'd suggest that since a user is still building trust when they first start using an app, you should require as few permissions as possible up front. Personally, I'd punt that profile-update requirement to some later point.
An example might be after a user has used your app for N sessions, or after N months.
Prompt the user for permission when they use a feature that requires it, explain why it's needed, and let them cancel if they want. I've seen this pattern used many times elsewhere.
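A minimal sketch of the trust-gating idea above, in Python. The function name and thresholds are hypothetical, purely illustrative values for "N sessions or N months":

```python
from datetime import datetime, timedelta

# Hypothetical thresholds for when to start asking for optional permissions.
MIN_SESSIONS = 5
MIN_TENURE = timedelta(days=60)  # roughly two months

def should_request_optional_permission(session_count, first_seen, now=None):
    """Only ask for non-essential permissions once the user has shown
    some engagement: at least N sessions, or N months since first use."""
    now = now or datetime.utcnow()
    return session_count >= MIN_SESSIONS or (now - first_seen) >= MIN_TENURE
```

A brand-new user (one session, joined today) would not be prompted yet; a long-tenured user would be, regardless of session count.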
As a gent born and raised in Texas who has never seen the show, I'm pleasantly surprised to see these comments about how popular WTR was internationally. If I had been asked to bet, I would have lost money on this one.
From my memory of the 90s: Baywatch, X-Files, that talking-car one, Beverly Hills 90210, Ninja Turtles. Some dumb sitcom named Step by Step? edit: oh, and ALF
Oh, and Married with Children, but it always aired very late at night and I wasn't allowed to watch it.
And our teacher always played us ET on VHS. (and that dog playing basketball.)
If you like MwC, look up episodes of Unhappily Ever After on Youtube, it's sort of the second-generation MwC. Same sort of humour but taken even further, I can easily re-watch Unhappily but MwC is sort of a once-you've-seen-it...
I've got the impression that the big US exports are the ones that play into big American stereotypes, e.g. WTR, Baywatch, Friends. It's not even that viewers see these shows and get programmed with the stereotypes; they already hold them (Texas, California, NYC), and shows like these feed their imaginations and fill in the detail.
Exported media is weird. Like the huge proportion of British/BBC output (usually period, but also often detective in a way redolent of Christie) that is made primarily for export to foreign consumers who think of British upper-class culture as aspirational.
Walker, Texas Ranger and Baywatch were both created by non-network studios as syndicated shows; they weren't prime-time network shows. The budgets for syndicated content are a lot lower than for network-produced content.
The rights to air these sorts of shows are dirt cheap compared to Friends or Seinfeld, so it makes sense that cheap syndicated garbage like Walker, Texas Ranger and Baywatch was popular internationally.
There is US exported media that just randomly becomes popular with a specific demographic. Case in point: The Adventures of Ford Fairlane, a flick with Andrew Dice Clay that won a Razzie the year it came out. IIRC it got a cult following in Norway because the voice-over was done by a popular radio DJ.
It was a syndicated show; the goal was to license it to as many companies as possible. It was never a network TV show like Seinfeld, whose syndication rights are way more expensive than those of made-for-syndication shows like WTR.
If you do, you could protect yourself with a sell stop below $17.25... because if it breaks that on weekly candles, next are $14 and $10. Or you could buy some calls instead when the volatility calms down. If you do it now, the volcrush could happen even if you're correct.
Not investment advice, do your own research. I'm just someone on the Internet.
Thank you for explaining this, I had always wondered how a carrier could tell a device was tethered if a router was not passing on tethered device details.
Another way to do it is to look for requests to domains that phones never access but desktops/laptops often do. Windows Update is the most common, but you could probably do apt package repositories or whatever.
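A sketch of that heuristic in Python: flag a subscriber as likely tethering if its DNS queries hit domains that phones essentially never contact but desktops routinely do. The domain list is an assumption for illustration, not an actual carrier blocklist:

```python
# Hypothetical desktop-only hosts: OS update servers and package repos.
DESKTOP_ONLY_DOMAINS = {
    "windowsupdate.com",   # Windows Update
    "deb.debian.org",      # apt package repository
    "archive.ubuntu.com",  # apt package repository
}

def looks_like_tethering(queried_domains):
    """Return True if any queried domain is, or is a subdomain of,
    a known desktop-only host."""
    for domain in queried_domains:
        if any(domain == d or domain.endswith("." + d)
               for d in DESKTOP_ONLY_DOMAINS):
            return True
    return False
```

In practice a carrier would apply this to DNS or SNI logs per subscriber, and likely combine it with other signals (e.g. TTL decrement patterns) before acting.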
Agreed. Yet security researchers and our wider community also need to recognize that vulnerabilities are foreign territory to most non-technical users.
Cold-approach vulnerability reports, quite frankly, scare non-technical organizations. It's like someone you've never met telling you that the door on your back bedroom balcony can be opened with a dummy key, and that they know because they tried it.
Such organizations don't know what to do. They're scared, wondering whether someone also took financial information, etc. Internal strife and lots of discussion, with plenty of wild speculation (as is the norm), usually occur before any communication comes back.
It just isn't the same as what security-forward organizations do, so it often comes as a surprise to engineers when a "good deed" is taken as malice.
Maybe they should simply use some common sense? If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.
If they would want to extort you, they would possibly do so early on. And maybe encrypt some data as a "proof of concept" ...
But some organizations seem to think that their lawyers will remedy every failure and that's enough.
> If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.
after* doing it. Though I agree with your general point
Note the parts in the email to the organization where OP (1) mentions they found underage students among the unsecured accounts and (2) attaches a script that dumps the database, ready to go¹. It takes very little to see in access logs that they accessed records that they weren't authorized to, which makes it hard to distinguish their actions from malicious ones
I do agree that if the org had done a cursory web search, they'd have found that everything OP did (besides dumping more than one record from the database) is standard practice and that responsible disclosure is an established practice that criminals obviously wouldn't use. That OP subsequently agrees to sign a removal agreement, besides the lack of any extortion, is a further sign of good faith which the org should have taken them up on
¹ though very inefficiently, but the data protection officer that they were in touch with (note: not a lawyer) wouldn't know that and the IT person that advises them might not feel the need to mention it
You want frontier models to actively prevent people from using them to do vulnerability research because you're worried bad people will do vulnerability research?
Not at all. I was suggesting that if an account is performing source-code-level vulnerability scanning across "numerous" codebases, it could be an account of interest: a sign of misuse.
That's different from someone's "npm audit" flagging issues with packages in a build and updating to new revisions. It's also different from iterating deeply on the source code of a single project (e.g. the nginx web server).
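The distinction above could be operationalized as a simple breadth heuristic: count how many distinct codebases each account scans and flag the outliers. A minimal Python sketch, where the threshold and data shape are purely illustrative assumptions:

```python
from collections import defaultdict

DISTINCT_REPO_THRESHOLD = 20  # assumed cutoff, purely illustrative

def flag_accounts_of_interest(scan_events):
    """scan_events: iterable of (account_id, repo_id) pairs, one per
    source-level scan request. Returns the set of accounts that scanned
    more distinct repos than the threshold (breadth, not depth)."""
    repos_by_account = defaultdict(set)
    for account_id, repo_id in scan_events:
        repos_by_account[account_id].add(repo_id)
    return {account for account, repos in repos_by_account.items()
            if len(repos) > DISTINCT_REPO_THRESHOLD}
```

Note this flags breadth across many codebases while leaving alone an account that scans one project (like nginx) thousands of times.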