I’ve started publishing my “extension security basics” article series. First article takes apart a very simple extension. Two more are already written, quite a few more are planned.
I’ve added another data point to https://github.com/palant/chrome-extension-manifests-dataset, now we can compare extension manifests from November 2021 to those from August 2022.
One finding: Manifest V3 usage went up from 3.5% to 16.6% in that time. But more than 80% of extensions are still on Manifest V2.
It seems that for some people Hacker News is not toxic enough. So they create invite-only walled gardens where they can bash other people’s articles undisturbed. And if the author then tries to set things straight: “Sorry, you are not allowed to participate. 🤷‍♂️” #rant
Got an email notification saying that #Remembear is shutting down. Too bad, that was one of the few vendors who handled security issue reports quickly and professionally.
And apparently, the answer is: I compile with my own allocator. That way I can not only log all allocations but also ignore deallocations, making sure no two data structures ever share a memory location. Rather smelly code, but it works.
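The post doesn’t show the actual code, but the core trick can be sketched in a few lines of Rust: a custom global allocator that counts allocations and deliberately never frees, so freed memory is never reused. This is a minimal sketch with a hypothetical `LeakyAlloc` type, not the author’s implementation (which also logs where allocations happen):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

// Hypothetical allocator: counts every allocation and leaks on dealloc,
// so no two data structures can ever share a memory location.
struct LeakyAlloc;

unsafe impl GlobalAlloc for LeakyAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Counting instead of printing here: formatting inside alloc()
        // could itself allocate and recurse.
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // Intentionally do nothing: leftover secrets stay where they
        // were written, so a memory search can attribute them.
    }
}

#[global_allocator]
static GLOBAL: LeakyAlloc = LeakyAlloc;

fn main() {
    let s = String::from("leftover secret?");
    drop(s); // memory is NOT returned to the system
    println!("allocations so far: {}", ALLOCATIONS.load(Ordering::Relaxed));
}
```

Leaking everything is obviously only viable for short-lived test runs, which matches the integration-test use case described above.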
Unbelievable but true: I have it all ironed out. All the implicit input/output buffers, all the timing issues, and even most of the OS-specific weirdness when it comes to searching a process’ memory for leftover secrets. 🥳
Got this one figured out: io-streams crate gives me unbuffered input, so no secrets will be leaked via buffers here. Now to the next secret leak…
Well, I have a dilemma: reading a password from stdin via the usual I/O leaves that password in memory, presumably due to a libc buffer. Reading the password properly via rpassword doesn’t, but it isn’t compatible with my integration tests (the ones searching memory for secrets). Heh…
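The buffering problem is easy to demonstrate with std alone: a `BufRead` wrapper reads a whole chunk into an internal buffer the caller doesn’t control, so the password lingers there even after the `String` is dropped. Simulated input and a hypothetical password, but the mechanism is the same as with libc’s stdio buffer:

```rust
use std::io::{BufRead, BufReader};

fn main() {
    // Simulated stdin: the password followed by other input.
    let input: &[u8] = b"hunter2\nmore input\n";
    let mut reader = BufReader::new(input);

    let mut password = String::new();
    reader.read_line(&mut password).unwrap();
    assert_eq!(password.trim_end(), "hunter2");
    drop(password); // our copy is gone...

    // ...but the reader slurped far more than the one line we asked for.
    // Everything read passes through (and lingers in) its internal buffer.
    assert!(!reader.buffer().is_empty());
}
```

With libc the situation is worse still, since that buffer lives outside Rust’s control entirely, which is presumably why unbuffered reading (io-streams, rpassword) avoids the leak.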
Ok, I’m now using the secrecy crate in my #rustlang code to make sure no secrets are left in memory. I have automated memory searching and it finds the secrets nevertheless. And now the trick question: how do I figure out which code path left them there? 😅
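For context: the secrecy crate wraps secrets so that their memory is overwritten when they are dropped. Here is a minimal std-only sketch of that zeroize-on-drop idea, with a hypothetical `SecretBytes` type that is not the crate’s actual API:

```rust
// Hypothetical sketch of the zeroize-on-drop idea behind the secrecy
// and zeroize crates. Not the crates' actual API.
struct SecretBytes(Vec<u8>);

impl SecretBytes {
    fn wipe(buf: &mut [u8]) {
        for b in buf.iter_mut() {
            // Volatile write: the compiler can't optimize the wipe away
            // as a dead store.
            unsafe { std::ptr::write_volatile(b, 0) };
        }
    }
}

impl Drop for SecretBytes {
    fn drop(&mut self) {
        SecretBytes::wipe(&mut self.0);
    }
}

fn main() {
    let secret = SecretBytes(b"correct horse".to_vec());
    // ... use secret.0 here ...
    drop(secret); // bytes are zeroed before the allocation is freed
}
```

The catch: this only wipes the final allocation. Any earlier copies (reallocations, moves, I/O buffers) are exactly the leftovers a memory search still finds, which is what makes attributing them to a code path hard.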
So malware is abusing Developer Mode to install extensions that share their extension ID with a legitimate Google extension. Pretty clever, and I guess it’s exactly why Mozilla decided against allowing developers to install permanent extensions.
@ro While this seems to be a duck, I have no idea what kind of duck it is. It appears to be some domesticated duck but the two of them were there on their own.
@ro Ah, so Unicode also includes letters found in a single manuscript? Font creators certainly appreciate this. 😅
Somehow I didn’t see the Soatok vs. Bugcrowd story (https://twitter.com/SoatokDhole/status/1536765180645974016) when it happened. Frankly, it doesn’t surprise me in the least. Bug bounty platforms currently have two goals:
1. Reduce the effort for vendors
2. Reduce PR damage from disclosures
Keeping vendor’s customers secure is not on the list.
The first one means that vulnerability reports usually aren’t handled by developers but rather by staff of the bug bounty platform who have no deeper knowledge of the product. Hence they must rely on the researcher to prove the exact impact, ideally via a proof of concept.
This is great for the company, they have to “waste” less developer time on handling security reports. Instead this approach shifts the burden onto security researchers. But hey, they are being paid for it, right?
The customers are the ones losing out, of course. Bug bounty platforms disincentivize reporting issues which might be considered minor. They also disincentivize reporting out-of-the-box issues. So bug bounty reports will concentrate on obvious targets. https://palant.info/2017/10/04/observations-on-managed-bug-bounty-programs/
And of course bug bounty platforms will retaliate against “unauthorized” disclosure. Their customer is the vendor, after all, and the vendor hired them to avoid bad PR. Vendors don’t like being called out when they dismiss a valid vulnerability or take years to fix it.
For reference: these are largely the reasons why I stopped using bug bounty platforms years ago. I do security research with the goal of making users more secure. For that I need to evaluate the entire attack surface, and disclosure deadlines aren’t optional either.
Today I got reminded that 14 years ago I asked Mozilla to disable dynamic code execution in browser extensions: https://bugzilla.mozilla.org/show_bug.cgi?id=477380. 13 years ago the request was rejected as “too late to fix.” Then 10 years ago the Chrome devs did it by means of CSP. 🤷‍♂️
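For reference, Chrome enforces a Content Security Policy on Manifest V2 extensions that disallows `eval()` and remotely loaded scripts. An illustrative manifest fragment spelling out what is, to my knowledge, the documented default:

```json
{
  "manifest_version": 2,
  "name": "Example",
  "version": "1.0",
  "content_security_policy": "script-src 'self'; object-src 'self'"
}
```

Extensions could relax this somewhat (e.g. add `'unsafe-eval'`), but Manifest V3 removed even that escape hatch.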
Wladimir Palant, software developer and security researcher, browser extensions expert. He/him