My my...the amount of breathless speculation about the Google Play store banning Fediverse apps has gone viral.

@ScottMortimer
I'm far less interested in the speculation than the fact it's happening 🤷‍♂️

@jfinkhaeuser
Actually, nothing more than a warning has happened so far. I am sure it will get resolved in the next few days.

qoto.org/@freemo/1047652888632

@ScottMortimer
I read the post, yes. Of course they warn first. But a warning that doesn't mention any way to avoid the ban is, at best, a dick move. At worst it's intentional. I get why people speculate. I'm not sure it's helpful, but it's very understandable.

@ScottMortimer
I mean, an organisation of Google's size and maturity in these matters has all the institutional knowledge it needs not to fuck up communications here, not to send out these warnings by mistake, etc. Accidents can always happen. But a business of that kind has the means to prevent them. Unlike, say, a five-person startup.

That leaves three options: a) they don't have control, b) they don't care, or c) it was intentional. Any of these is worrisome.

@jfinkhaeuser
Ironically, I think the five-person startup would have handled it better. Google and other giant tech companies rely almost completely on automation for most things. This may just be a case of scripted scans of the Play Store repositories finding bits of offending code and automatically generating these warnings. Hopefully, once this is brought to the attention of a human employee, it will get resolved in a reasonable fashion.

@ScottMortimer
I guess we're looking at this from slightly different angles. I'm wearing the hat in which I do tech due diligence. And in that situation, you can forgive a small startup for not having solved everything, and even for not having everything on their radar, because resources are tight.

Google doesn't have that excuse. So if they send out anything unwarranted, there's a different reason at play.

@ScottMortimer
I agree it's probably largely automated, and I also agree that there's a good chance that once a human is involved, it will get resolved. But sending out a warning without a human check in the first place, causing this kind of response, indicates that their process isn't really covering this situation.

Again, the question is: why, when lack of resources cannot be the reason?

The startup would likely do better here, but worse in the general case due to its lack of resources for automation.

@jfinkhaeuser

The cold, logical answer is that they just don't care enough to bother doing otherwise. The possible negative effects of disenfranchising a small community of people aren't on their monitoring screens, and hopefully someone remedies that.
